00:00:00.000 Started by upstream project "autotest-per-patch" build number 131943 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.033 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.034 The recommended git tool is: git 00:00:00.035 using credential 00000000-0000-0000-0000-000000000002 00:00:00.040 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.058 Fetching changes from the remote Git repository 00:00:00.063 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.080 Using shallow fetch with depth 1 00:00:00.080 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.080 > git --version # timeout=10 00:00:00.103 > git --version # 'git version 2.39.2' 00:00:00.103 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.133 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.133 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.708 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.721 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.733 Checking out Revision 44e7d6069a399ee2647233b387d68a938882e7b7 (FETCH_HEAD) 00:00:03.733 > git config core.sparsecheckout # timeout=10 00:00:03.745 > git read-tree -mu HEAD # timeout=10 00:00:03.762 > git checkout -f 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=5 00:00:03.785 Commit message: "scripts/bmc: Rework Get NIC Info cmd parser" 00:00:03.785 > git rev-list --no-walk 
44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=10 00:00:03.896 [Pipeline] Start of Pipeline 00:00:03.912 [Pipeline] library 00:00:03.914 Loading library shm_lib@master 00:00:03.914 Library shm_lib@master is cached. Copying from home. 00:00:03.932 [Pipeline] node 00:00:03.939 Running on VM-host-SM38 in /var/jenkins/workspace/raid-vg-autotest 00:00:03.941 [Pipeline] { 00:00:03.953 [Pipeline] catchError 00:00:03.955 [Pipeline] { 00:00:03.969 [Pipeline] wrap 00:00:03.979 [Pipeline] { 00:00:03.989 [Pipeline] stage 00:00:03.991 [Pipeline] { (Prologue) 00:00:04.011 [Pipeline] echo 00:00:04.014 Node: VM-host-SM38 00:00:04.021 [Pipeline] cleanWs 00:00:04.034 [WS-CLEANUP] Deleting project workspace... 00:00:04.034 [WS-CLEANUP] Deferred wipeout is used... 00:00:04.041 [WS-CLEANUP] done 00:00:04.247 [Pipeline] setCustomBuildProperty 00:00:04.329 [Pipeline] httpRequest 00:00:04.704 [Pipeline] echo 00:00:04.706 Sorcerer 10.211.164.101 is alive 00:00:04.715 [Pipeline] retry 00:00:04.717 [Pipeline] { 00:00:04.730 [Pipeline] httpRequest 00:00:04.735 HttpMethod: GET 00:00:04.735 URL: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:04.736 Sending request to url: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:04.738 Response Code: HTTP/1.1 200 OK 00:00:04.739 Success: Status code 200 is in the accepted range: 200,404 00:00:04.739 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:05.025 [Pipeline] } 00:00:05.042 [Pipeline] // retry 00:00:05.049 [Pipeline] sh 00:00:05.334 + tar --no-same-owner -xf jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:05.350 [Pipeline] httpRequest 00:00:06.012 [Pipeline] echo 00:00:06.014 Sorcerer 10.211.164.101 is alive 00:00:06.022 [Pipeline] retry 00:00:06.024 [Pipeline] { 00:00:06.035 [Pipeline] httpRequest 00:00:06.038 HttpMethod: GET 00:00:06.039 URL: 
http://10.211.164.101/packages/spdk_bfbfb6d81df2b30fd36d82707d65379c232889d1.tar.gz 00:00:06.040 Sending request to url: http://10.211.164.101/packages/spdk_bfbfb6d81df2b30fd36d82707d65379c232889d1.tar.gz 00:00:06.047 Response Code: HTTP/1.1 200 OK 00:00:06.048 Success: Status code 200 is in the accepted range: 200,404 00:00:06.049 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_bfbfb6d81df2b30fd36d82707d65379c232889d1.tar.gz 00:00:53.519 [Pipeline] } 00:00:53.538 [Pipeline] // retry 00:00:53.547 [Pipeline] sh 00:00:53.836 + tar --no-same-owner -xf spdk_bfbfb6d81df2b30fd36d82707d65379c232889d1.tar.gz 00:00:57.153 [Pipeline] sh 00:00:57.435 + git -C spdk log --oneline -n5 00:00:57.436 bfbfb6d81 util: handle events for fd type eventfd 00:00:57.436 c761dc1b3 util: Extended options for spdk_fd_group_add 00:00:57.436 22022919e nvme: enable interrupts for pcie nvme devices 00:00:57.436 cabdbcb5f nvme: Add transport interface to enable interrupts 00:00:57.436 806a0d0dc env_dpdk: new interfaces for pci device multi interrupt 00:00:57.456 [Pipeline] writeFile 00:00:57.474 [Pipeline] sh 00:00:57.760 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:57.773 [Pipeline] sh 00:00:58.059 + cat autorun-spdk.conf 00:00:58.059 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.059 SPDK_RUN_ASAN=1 00:00:58.059 SPDK_RUN_UBSAN=1 00:00:58.059 SPDK_TEST_RAID=1 00:00:58.059 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:58.068 RUN_NIGHTLY=0 00:00:58.070 [Pipeline] } 00:00:58.084 [Pipeline] // stage 00:00:58.101 [Pipeline] stage 00:00:58.103 [Pipeline] { (Run VM) 00:00:58.118 [Pipeline] sh 00:00:58.403 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:58.403 + echo 'Start stage prepare_nvme.sh' 00:00:58.403 Start stage prepare_nvme.sh 00:00:58.403 + [[ -n 10 ]] 00:00:58.403 + disk_prefix=ex10 00:00:58.403 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:00:58.403 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:00:58.403 + source 
/var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:00:58.403 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.403 ++ SPDK_RUN_ASAN=1 00:00:58.403 ++ SPDK_RUN_UBSAN=1 00:00:58.403 ++ SPDK_TEST_RAID=1 00:00:58.403 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:58.403 ++ RUN_NIGHTLY=0 00:00:58.403 + cd /var/jenkins/workspace/raid-vg-autotest 00:00:58.403 + nvme_files=() 00:00:58.403 + declare -A nvme_files 00:00:58.403 + backend_dir=/var/lib/libvirt/images/backends 00:00:58.403 + nvme_files['nvme.img']=5G 00:00:58.403 + nvme_files['nvme-cmb.img']=5G 00:00:58.403 + nvme_files['nvme-multi0.img']=4G 00:00:58.403 + nvme_files['nvme-multi1.img']=4G 00:00:58.403 + nvme_files['nvme-multi2.img']=4G 00:00:58.403 + nvme_files['nvme-openstack.img']=8G 00:00:58.403 + nvme_files['nvme-zns.img']=5G 00:00:58.403 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:58.403 + (( SPDK_TEST_FTL == 1 )) 00:00:58.403 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:58.403 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:58.403 + for nvme in "${!nvme_files[@]}" 00:00:58.403 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi2.img -s 4G 00:00:58.403 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:58.403 + for nvme in "${!nvme_files[@]}" 00:00:58.403 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-cmb.img -s 5G 00:00:58.403 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:58.403 + for nvme in "${!nvme_files[@]}" 00:00:58.403 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-openstack.img -s 8G 00:00:58.403 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:58.403 + for nvme in "${!nvme_files[@]}" 00:00:58.403 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n 
/var/lib/libvirt/images/backends/ex10-nvme-zns.img -s 5G 00:00:58.403 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:58.403 + for nvme in "${!nvme_files[@]}" 00:00:58.403 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi1.img -s 4G 00:00:58.403 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:58.403 + for nvme in "${!nvme_files[@]}" 00:00:58.403 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi0.img -s 4G 00:00:58.404 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:58.664 + for nvme in "${!nvme_files[@]}" 00:00:58.664 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme.img -s 5G 00:00:58.664 Formatting '/var/lib/libvirt/images/backends/ex10-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:58.665 ++ sudo grep -rl ex10-nvme.img /etc/libvirt/qemu 00:00:58.665 + echo 'End stage prepare_nvme.sh' 00:00:58.665 End stage prepare_nvme.sh 00:00:58.677 [Pipeline] sh 00:00:58.963 + DISTRO=fedora39 00:00:58.963 + CPUS=10 00:00:58.963 + RAM=12288 00:00:58.963 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:58.963 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex10-nvme.img -b /var/lib/libvirt/images/backends/ex10-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img -H -a -v -f fedora39 00:00:58.963 00:00:58.963 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:00:58.963 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:00:58.963 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 
00:00:58.963 HELP=0 00:00:58.963 DRY_RUN=0 00:00:58.963 NVME_FILE=/var/lib/libvirt/images/backends/ex10-nvme.img,/var/lib/libvirt/images/backends/ex10-nvme-multi0.img, 00:00:58.963 NVME_DISKS_TYPE=nvme,nvme, 00:00:58.963 NVME_AUTO_CREATE=0 00:00:58.963 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img, 00:00:58.963 NVME_CMB=,, 00:00:58.963 NVME_PMR=,, 00:00:58.963 NVME_ZNS=,, 00:00:58.963 NVME_MS=,, 00:00:58.963 NVME_FDP=,, 00:00:58.963 SPDK_VAGRANT_DISTRO=fedora39 00:00:58.963 SPDK_VAGRANT_VMCPU=10 00:00:58.963 SPDK_VAGRANT_VMRAM=12288 00:00:58.963 SPDK_VAGRANT_PROVIDER=libvirt 00:00:58.963 SPDK_VAGRANT_HTTP_PROXY= 00:00:58.963 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:58.963 SPDK_OPENSTACK_NETWORK=0 00:00:58.963 VAGRANT_PACKAGE_BOX=0 00:00:58.963 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:58.963 FORCE_DISTRO=true 00:00:58.963 VAGRANT_BOX_VERSION= 00:00:58.963 EXTRA_VAGRANTFILES= 00:00:58.963 NIC_MODEL=e1000 00:00:58.963 00:00:58.963 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:00:58.963 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:01:01.511 Bringing machine 'default' up with 'libvirt' provider... 00:01:02.083 ==> default: Creating image (snapshot of base box volume). 00:01:02.083 ==> default: Creating domain with the following settings... 
00:01:02.083 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730280940_80b4edfcb45e4ccc9bbc 00:01:02.083 ==> default: -- Domain type: kvm 00:01:02.083 ==> default: -- Cpus: 10 00:01:02.083 ==> default: -- Feature: acpi 00:01:02.083 ==> default: -- Feature: apic 00:01:02.083 ==> default: -- Feature: pae 00:01:02.083 ==> default: -- Memory: 12288M 00:01:02.083 ==> default: -- Memory Backing: hugepages: 00:01:02.083 ==> default: -- Management MAC: 00:01:02.083 ==> default: -- Loader: 00:01:02.083 ==> default: -- Nvram: 00:01:02.083 ==> default: -- Base box: spdk/fedora39 00:01:02.083 ==> default: -- Storage pool: default 00:01:02.084 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730280940_80b4edfcb45e4ccc9bbc.img (20G) 00:01:02.084 ==> default: -- Volume Cache: default 00:01:02.084 ==> default: -- Kernel: 00:01:02.084 ==> default: -- Initrd: 00:01:02.084 ==> default: -- Graphics Type: vnc 00:01:02.084 ==> default: -- Graphics Port: -1 00:01:02.084 ==> default: -- Graphics IP: 127.0.0.1 00:01:02.084 ==> default: -- Graphics Password: Not defined 00:01:02.084 ==> default: -- Video Type: cirrus 00:01:02.084 ==> default: -- Video VRAM: 9216 00:01:02.084 ==> default: -- Sound Type: 00:01:02.084 ==> default: -- Keymap: en-us 00:01:02.084 ==> default: -- TPM Path: 00:01:02.084 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:02.084 ==> default: -- Command line args: 00:01:02.084 ==> default: -> value=-device, 00:01:02.084 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:02.084 ==> default: -> value=-drive, 00:01:02.084 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-0-drive0, 00:01:02.084 ==> default: -> value=-device, 00:01:02.084 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:02.084 ==> default: -> value=-device, 00:01:02.084 ==> default: -> 
value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:02.084 ==> default: -> value=-drive, 00:01:02.084 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:02.084 ==> default: -> value=-device, 00:01:02.084 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:02.084 ==> default: -> value=-drive, 00:01:02.084 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:02.084 ==> default: -> value=-device, 00:01:02.084 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:02.084 ==> default: -> value=-drive, 00:01:02.084 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:02.084 ==> default: -> value=-device, 00:01:02.084 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:02.347 ==> default: Creating shared folders metadata... 00:01:02.347 ==> default: Starting domain. 00:01:04.325 ==> default: Waiting for domain to get an IP address... 00:01:22.459 ==> default: Waiting for SSH to become available... 00:01:23.846 ==> default: Configuring and enabling network interfaces... 00:01:28.163 default: SSH address: 192.168.121.14:22 00:01:28.163 default: SSH username: vagrant 00:01:28.163 default: SSH auth method: private key 00:01:30.082 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:38.217 ==> default: Mounting SSHFS shared folder... 00:01:39.159 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:39.159 ==> default: Checking Mount.. 
00:01:40.547 ==> default: Folder Successfully Mounted! 00:01:40.547 00:01:40.547 SUCCESS! 00:01:40.547 00:01:40.547 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:40.547 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:40.547 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:40.547 00:01:40.558 [Pipeline] } 00:01:40.574 [Pipeline] // stage 00:01:40.583 [Pipeline] dir 00:01:40.584 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:01:40.586 [Pipeline] { 00:01:40.600 [Pipeline] catchError 00:01:40.602 [Pipeline] { 00:01:40.615 [Pipeline] sh 00:01:40.901 + vagrant ssh-config --host vagrant 00:01:40.901 + sed -ne '/^Host/,$p' 00:01:40.901 + tee ssh_conf 00:01:43.512 Host vagrant 00:01:43.512 HostName 192.168.121.14 00:01:43.512 User vagrant 00:01:43.512 Port 22 00:01:43.512 UserKnownHostsFile /dev/null 00:01:43.512 StrictHostKeyChecking no 00:01:43.512 PasswordAuthentication no 00:01:43.512 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:43.512 IdentitiesOnly yes 00:01:43.512 LogLevel FATAL 00:01:43.512 ForwardAgent yes 00:01:43.512 ForwardX11 yes 00:01:43.512 00:01:43.578 [Pipeline] withEnv 00:01:43.580 [Pipeline] { 00:01:43.594 [Pipeline] sh 00:01:43.877 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:43.877 source /etc/os-release 00:01:43.877 [[ -e /image.version ]] && img=$(< /image.version) 00:01:43.877 # Minimal, systemd-like check. 
00:01:43.877 if [[ -e /.dockerenv ]]; then 00:01:43.877 # Clear garbage from the node'\''s name: 00:01:43.877 # agt-er_autotest_547-896 -> autotest_547-896 00:01:43.877 # $HOSTNAME is the actual container id 00:01:43.877 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:43.877 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:43.877 # We can assume this is a mount from a host where container is running, 00:01:43.877 # so fetch its hostname to easily identify the target swarm worker. 00:01:43.877 container="$(< /etc/hostname) ($agent)" 00:01:43.877 else 00:01:43.877 # Fallback 00:01:43.877 container=$agent 00:01:43.877 fi 00:01:43.877 fi 00:01:43.877 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:43.877 ' 00:01:44.152 [Pipeline] } 00:01:44.170 [Pipeline] // withEnv 00:01:44.181 [Pipeline] setCustomBuildProperty 00:01:44.198 [Pipeline] stage 00:01:44.201 [Pipeline] { (Tests) 00:01:44.220 [Pipeline] sh 00:01:44.508 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:44.787 [Pipeline] sh 00:01:45.075 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:45.353 [Pipeline] timeout 00:01:45.353 Timeout set to expire in 1 hr 30 min 00:01:45.355 [Pipeline] { 00:01:45.368 [Pipeline] sh 00:01:45.653 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:01:46.254 HEAD is now at bfbfb6d81 util: handle events for fd type eventfd 00:01:46.267 [Pipeline] sh 00:01:46.553 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:01:46.830 [Pipeline] sh 00:01:47.116 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:47.393 [Pipeline] sh 00:01:47.676 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 
'JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo' 00:01:47.936 ++ readlink -f spdk_repo 00:01:47.936 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:47.936 + [[ -n /home/vagrant/spdk_repo ]] 00:01:47.936 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:47.936 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:47.936 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:47.936 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:47.936 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:47.936 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:47.936 + cd /home/vagrant/spdk_repo 00:01:47.936 + source /etc/os-release 00:01:47.936 ++ NAME='Fedora Linux' 00:01:47.936 ++ VERSION='39 (Cloud Edition)' 00:01:47.936 ++ ID=fedora 00:01:47.936 ++ VERSION_ID=39 00:01:47.936 ++ VERSION_CODENAME= 00:01:47.937 ++ PLATFORM_ID=platform:f39 00:01:47.937 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:47.937 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:47.937 ++ LOGO=fedora-logo-icon 00:01:47.937 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:47.937 ++ HOME_URL=https://fedoraproject.org/ 00:01:47.937 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:47.937 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:47.937 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:47.937 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:47.937 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:47.937 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:47.937 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:47.937 ++ SUPPORT_END=2024-11-12 00:01:47.937 ++ VARIANT='Cloud Edition' 00:01:47.937 ++ VARIANT_ID=cloud 00:01:47.937 + uname -a 00:01:47.937 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:47.937 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:48.197 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:48.197 Hugepages 00:01:48.197 
node hugesize free / total 00:01:48.197 node0 1048576kB 0 / 0 00:01:48.197 node0 2048kB 0 / 0 00:01:48.197 00:01:48.197 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:48.522 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:48.522 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:48.522 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:48.522 + rm -f /tmp/spdk-ld-path 00:01:48.522 + source autorun-spdk.conf 00:01:48.522 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:48.522 ++ SPDK_RUN_ASAN=1 00:01:48.522 ++ SPDK_RUN_UBSAN=1 00:01:48.522 ++ SPDK_TEST_RAID=1 00:01:48.522 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:48.522 ++ RUN_NIGHTLY=0 00:01:48.522 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:48.522 + [[ -n '' ]] 00:01:48.522 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:48.522 + for M in /var/spdk/build-*-manifest.txt 00:01:48.522 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:48.522 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:48.522 + for M in /var/spdk/build-*-manifest.txt 00:01:48.522 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:48.522 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:48.522 + for M in /var/spdk/build-*-manifest.txt 00:01:48.522 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:48.522 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:48.522 ++ uname 00:01:48.522 + [[ Linux == \L\i\n\u\x ]] 00:01:48.522 + sudo dmesg -T 00:01:48.522 + sudo dmesg --clear 00:01:48.522 + dmesg_pid=5002 00:01:48.522 + [[ Fedora Linux == FreeBSD ]] 00:01:48.522 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:48.522 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:48.522 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:48.522 + [[ -x /usr/src/fio-static/fio ]] 00:01:48.522 + sudo dmesg -Tw 00:01:48.522 + export FIO_BIN=/usr/src/fio-static/fio 
00:01:48.522 + FIO_BIN=/usr/src/fio-static/fio 00:01:48.522 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:48.522 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:48.522 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:48.522 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:48.522 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:48.522 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:48.522 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:48.522 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:48.522 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:48.522 09:36:27 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:48.522 09:36:27 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:48.522 09:36:27 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:48.522 09:36:27 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:01:48.522 09:36:27 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:01:48.522 09:36:27 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:01:48.522 09:36:27 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:48.522 09:36:27 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:01:48.522 09:36:27 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:48.522 09:36:27 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:48.522 09:36:27 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:48.522 09:36:27 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:48.522 09:36:27 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:48.522 09:36:27 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:48.522 09:36:27 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:48.522 
09:36:27 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:48.522 09:36:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:48.522 09:36:27 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:48.522 09:36:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:48.522 09:36:27 -- paths/export.sh@5 -- $ export PATH 00:01:48.523 09:36:27 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:48.523 09:36:27 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:48.523 09:36:27 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:48.783 09:36:27 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730280987.XXXXXX 00:01:48.783 09:36:27 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730280987.UifHhG 00:01:48.784 09:36:27 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:48.784 09:36:27 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:48.784 09:36:27 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:48.784 09:36:27 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:48.784 09:36:27 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:48.784 09:36:27 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:48.784 09:36:27 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:48.784 09:36:27 -- common/autotest_common.sh@10 -- $ set +x 00:01:48.784 09:36:27 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 
00:01:48.784 09:36:27 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:01:48.784 09:36:27 -- pm/common@17 -- $ local monitor
00:01:48.784 09:36:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:48.784 09:36:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:48.784 09:36:27 -- pm/common@25 -- $ sleep 1
00:01:48.784 09:36:27 -- pm/common@21 -- $ date +%s
00:01:48.784 09:36:27 -- pm/common@21 -- $ date +%s
00:01:48.784 09:36:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730280987
00:01:48.784 09:36:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730280987
00:01:48.784 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730280987_collect-cpu-load.pm.log
00:01:48.784 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730280987_collect-vmstat.pm.log
00:01:49.727 09:36:28 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:49.727 09:36:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:49.727 09:36:28 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:49.727 09:36:28 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:49.727 09:36:28 -- spdk/autobuild.sh@16 -- $ date -u
00:01:49.727 Wed Oct 30 09:36:28 AM UTC 2024
00:01:49.727 09:36:28 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:49.727 v25.01-pre-132-gbfbfb6d81
00:01:49.727 09:36:28 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:49.727 09:36:28 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:49.727 09:36:28 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:01:49.727 09:36:28 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:01:49.727 09:36:28 -- common/autotest_common.sh@10 -- $ set +x
00:01:49.727 ************************************
00:01:49.727 START TEST asan
00:01:49.727 ************************************
00:01:49.727 using asan
00:01:49.727 09:36:28 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan'
00:01:49.727
00:01:49.727 real 0m0.000s
00:01:49.727 user 0m0.000s
00:01:49.727 sys 0m0.000s
00:01:49.727 09:36:28 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:01:49.727 09:36:28 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:49.727 ************************************
00:01:49.727 END TEST asan
00:01:49.727 ************************************
00:01:49.727 09:36:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:49.727 09:36:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:49.727 09:36:28 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:01:49.727 09:36:28 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:01:49.727 09:36:28 -- common/autotest_common.sh@10 -- $ set +x
00:01:49.727 ************************************
00:01:49.727 START TEST ubsan
00:01:49.727 ************************************
00:01:49.727 using ubsan
00:01:49.727 09:36:28 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:01:49.727
00:01:49.727 real 0m0.000s
00:01:49.727 user 0m0.000s
00:01:49.727 sys 0m0.000s
00:01:49.727 09:36:28 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:01:49.727 09:36:28 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:49.727 ************************************
00:01:49.727 END TEST ubsan
00:01:49.727 ************************************
00:01:49.727 09:36:28 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:49.727 09:36:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:49.727 09:36:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:49.727 09:36:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:49.727 09:36:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:49.727 09:36:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:49.727 09:36:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:49.727 09:36:28 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:49.727 09:36:28 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:49.989 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:49.989 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:50.249 Using 'verbs' RDMA provider
00:02:01.234 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:13.526 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:13.526 Creating mk/config.mk...done.
00:02:13.526 Creating mk/cc.flags.mk...done.
00:02:13.526 Type 'make' to build.
00:02:13.526 09:36:50 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:13.526 09:36:50 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:02:13.526 09:36:50 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:02:13.526 09:36:50 -- common/autotest_common.sh@10 -- $ set +x
00:02:13.526 ************************************
00:02:13.526 START TEST make
00:02:13.526 ************************************
00:02:13.526 09:36:50 make -- common/autotest_common.sh@1127 -- $ make -j10
00:02:13.526 make[1]: Nothing to be done for 'all'.
00:02:23.613 The Meson build system
00:02:23.613 Version: 1.5.0
00:02:23.613 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:23.613 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:23.613 Build type: native build
00:02:23.613 Program cat found: YES (/usr/bin/cat)
00:02:23.613 Project name: DPDK
00:02:23.613 Project version: 24.03.0
00:02:23.613 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:23.613 C linker for the host machine: cc ld.bfd 2.40-14
00:02:23.613 Host machine cpu family: x86_64
00:02:23.613 Host machine cpu: x86_64
00:02:23.613 Message: ## Building in Developer Mode ##
00:02:23.613 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:23.613 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:23.613 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:23.613 Program python3 found: YES (/usr/bin/python3)
00:02:23.613 Program cat found: YES (/usr/bin/cat)
00:02:23.613 Compiler for C supports arguments -march=native: YES
00:02:23.613 Checking for size of "void *" : 8
00:02:23.613 Checking for size of "void *" : 8 (cached)
00:02:23.613 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:23.613 Library m found: YES
00:02:23.613 Library numa found: YES
00:02:23.613 Has header "numaif.h" : YES
00:02:23.613 Library fdt found: NO
00:02:23.613 Library execinfo found: NO
00:02:23.613 Has header "execinfo.h" : YES
00:02:23.613 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:23.613 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:23.613 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:23.613 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:23.613 Run-time dependency openssl found: YES 3.1.1
00:02:23.613 Run-time dependency libpcap found: YES 1.10.4
00:02:23.613 Has header "pcap.h" with dependency libpcap: YES
00:02:23.613 Compiler for C supports arguments -Wcast-qual: YES
00:02:23.613 Compiler for C supports arguments -Wdeprecated: YES
00:02:23.613 Compiler for C supports arguments -Wformat: YES
00:02:23.613 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:23.613 Compiler for C supports arguments -Wformat-security: NO
00:02:23.613 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:23.613 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:23.613 Compiler for C supports arguments -Wnested-externs: YES
00:02:23.613 Compiler for C supports arguments -Wold-style-definition: YES
00:02:23.613 Compiler for C supports arguments -Wpointer-arith: YES
00:02:23.613 Compiler for C supports arguments -Wsign-compare: YES
00:02:23.613 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:23.613 Compiler for C supports arguments -Wundef: YES
00:02:23.613 Compiler for C supports arguments -Wwrite-strings: YES
00:02:23.613 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:23.613 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:23.613 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:23.613 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:23.613 Program objdump found: YES (/usr/bin/objdump)
00:02:23.613 Compiler for C supports arguments -mavx512f: YES
00:02:23.613 Checking if "AVX512 checking" compiles: YES
00:02:23.613 Fetching value of define "__SSE4_2__" : 1
00:02:23.613 Fetching value of define "__AES__" : 1
00:02:23.613 Fetching value of define "__AVX__" : 1
00:02:23.613 Fetching value of define "__AVX2__" : 1
00:02:23.613 Fetching value of define "__AVX512BW__" : 1
00:02:23.613 Fetching value of define "__AVX512CD__" : 1
00:02:23.613 Fetching value of define "__AVX512DQ__" : 1
00:02:23.613 Fetching value of define "__AVX512F__" : 1
00:02:23.613 Fetching value of define "__AVX512VL__" : 1
00:02:23.613 Fetching value of define "__PCLMUL__" : 1
00:02:23.613 Fetching value of define "__RDRND__" : 1
00:02:23.613 Fetching value of define "__RDSEED__" : 1
00:02:23.613 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:23.613 Fetching value of define "__znver1__" : (undefined)
00:02:23.613 Fetching value of define "__znver2__" : (undefined)
00:02:23.613 Fetching value of define "__znver3__" : (undefined)
00:02:23.613 Fetching value of define "__znver4__" : (undefined)
00:02:23.613 Library asan found: YES
00:02:23.613 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:23.613 Message: lib/log: Defining dependency "log"
00:02:23.613 Message: lib/kvargs: Defining dependency "kvargs"
00:02:23.613 Message: lib/telemetry: Defining dependency "telemetry"
00:02:23.613 Library rt found: YES
00:02:23.613 Checking for function "getentropy" : NO
00:02:23.613 Message: lib/eal: Defining dependency "eal"
00:02:23.613 Message: lib/ring: Defining dependency "ring"
00:02:23.613 Message: lib/rcu: Defining dependency "rcu"
00:02:23.613 Message: lib/mempool: Defining dependency "mempool"
00:02:23.613 Message: lib/mbuf: Defining dependency "mbuf"
00:02:23.613 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:23.613 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:23.613 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:23.613 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:23.613 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:23.613 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:23.613 Compiler for C supports arguments -mpclmul: YES
00:02:23.613 Compiler for C supports arguments -maes: YES
00:02:23.613 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:23.613 Compiler for C supports arguments -mavx512bw: YES
00:02:23.613 Compiler for C supports arguments -mavx512dq: YES
00:02:23.613 Compiler for C supports arguments -mavx512vl: YES
00:02:23.613 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:23.613 Compiler for C supports arguments -mavx2: YES
00:02:23.613 Compiler for C supports arguments -mavx: YES
00:02:23.613 Message: lib/net: Defining dependency "net"
00:02:23.613 Message: lib/meter: Defining dependency "meter"
00:02:23.613 Message: lib/ethdev: Defining dependency "ethdev"
00:02:23.613 Message: lib/pci: Defining dependency "pci"
00:02:23.613 Message: lib/cmdline: Defining dependency "cmdline"
00:02:23.613 Message: lib/hash: Defining dependency "hash"
00:02:23.613 Message: lib/timer: Defining dependency "timer"
00:02:23.613 Message: lib/compressdev: Defining dependency "compressdev"
00:02:23.613 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:23.613 Message: lib/dmadev: Defining dependency "dmadev"
00:02:23.613 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:23.613 Message: lib/power: Defining dependency "power"
00:02:23.613 Message: lib/reorder: Defining dependency "reorder"
00:02:23.613 Message: lib/security: Defining dependency "security"
00:02:23.613 Has header "linux/userfaultfd.h" : YES
00:02:23.613 Has header "linux/vduse.h" : YES
00:02:23.613 Message: lib/vhost: Defining dependency "vhost"
00:02:23.613 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:23.613 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:23.613 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:23.613 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:23.613 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:23.613 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:23.613 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:23.613 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:23.613 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:23.613 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:23.613 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:23.613 Configuring doxy-api-html.conf using configuration
00:02:23.613 Configuring doxy-api-man.conf using configuration
00:02:23.613 Program mandb found: YES (/usr/bin/mandb)
00:02:23.613 Program sphinx-build found: NO
00:02:23.613 Configuring rte_build_config.h using configuration
00:02:23.613 Message:
00:02:23.613 =================
00:02:23.613 Applications Enabled
00:02:23.613 =================
00:02:23.613
00:02:23.613 apps:
00:02:23.613
00:02:23.613
00:02:23.613 Message:
00:02:23.613 =================
00:02:23.613 Libraries Enabled
00:02:23.613 =================
00:02:23.613
00:02:23.613 libs:
00:02:23.613 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:23.613 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:23.613 cryptodev, dmadev, power, reorder, security, vhost,
00:02:23.613
00:02:23.613 Message:
00:02:23.613 ===============
00:02:23.613 Drivers Enabled
00:02:23.613 ===============
00:02:23.613
00:02:23.613 common:
00:02:23.613
00:02:23.613 bus:
00:02:23.613 pci, vdev,
00:02:23.613 mempool:
00:02:23.613 ring,
00:02:23.613 dma:
00:02:23.613
00:02:23.613 net:
00:02:23.613
00:02:23.613 crypto:
00:02:23.613
00:02:23.613 compress:
00:02:23.613
00:02:23.613 vdpa:
00:02:23.613
00:02:23.613
00:02:23.613 Message:
00:02:23.613 =================
00:02:23.613 Content Skipped
00:02:23.613 =================
00:02:23.613
00:02:23.613 apps:
00:02:23.613 dumpcap: explicitly disabled via build config
00:02:23.613 graph: explicitly disabled via build config
00:02:23.613 pdump: explicitly disabled via build config
00:02:23.613 proc-info: explicitly disabled via build config
00:02:23.613 test-acl: explicitly disabled via build config
00:02:23.613 test-bbdev: explicitly disabled via build config
00:02:23.613 test-cmdline: explicitly disabled via build config
00:02:23.613 test-compress-perf: explicitly disabled via build config
00:02:23.613 test-crypto-perf: explicitly disabled via build config
00:02:23.613 test-dma-perf: explicitly disabled via build config
00:02:23.613 test-eventdev: explicitly disabled via build config
00:02:23.613 test-fib: explicitly disabled via build config
00:02:23.613 test-flow-perf: explicitly disabled via build config
00:02:23.613 test-gpudev: explicitly disabled via build config
00:02:23.613 test-mldev: explicitly disabled via build config
00:02:23.613 test-pipeline: explicitly disabled via build config
00:02:23.613 test-pmd: explicitly disabled via build config
00:02:23.613 test-regex: explicitly disabled via build config
00:02:23.613 test-sad: explicitly disabled via build config
00:02:23.613 test-security-perf: explicitly disabled via build config
00:02:23.613
00:02:23.613 libs:
00:02:23.614 argparse: explicitly disabled via build config
00:02:23.614 metrics: explicitly disabled via build config
00:02:23.614 acl: explicitly disabled via build config
00:02:23.614 bbdev: explicitly disabled via build config
00:02:23.614 bitratestats: explicitly disabled via build config
00:02:23.614 bpf: explicitly disabled via build config
00:02:23.614 cfgfile: explicitly disabled via build config
00:02:23.614 distributor: explicitly disabled via build config
00:02:23.614 efd: explicitly disabled via build config
00:02:23.614 eventdev: explicitly disabled via build config
00:02:23.614 dispatcher: explicitly disabled via build config
00:02:23.614 gpudev: explicitly disabled via build config
00:02:23.614 gro: explicitly disabled via build config
00:02:23.614 gso: explicitly disabled via build config
00:02:23.614 ip_frag: explicitly disabled via build config
00:02:23.614 jobstats: explicitly disabled via build config
00:02:23.614 latencystats: explicitly disabled via build config
00:02:23.614 lpm: explicitly disabled via build config
00:02:23.614 member: explicitly disabled via build config
00:02:23.614 pcapng: explicitly disabled via build config
00:02:23.614 rawdev: explicitly disabled via build config
00:02:23.614 regexdev: explicitly disabled via build config
00:02:23.614 mldev: explicitly disabled via build config
00:02:23.614 rib: explicitly disabled via build config
00:02:23.614 sched: explicitly disabled via build config
00:02:23.614 stack: explicitly disabled via build config
00:02:23.614 ipsec: explicitly disabled via build config
00:02:23.614 pdcp: explicitly disabled via build config
00:02:23.614 fib: explicitly disabled via build config
00:02:23.614 port: explicitly disabled via build config
00:02:23.614 pdump: explicitly disabled via build config
00:02:23.614 table: explicitly disabled via build config
00:02:23.614 pipeline: explicitly disabled via build config
00:02:23.614 graph: explicitly disabled via build config
00:02:23.614 node: explicitly disabled via build config
00:02:23.614
00:02:23.614 drivers:
00:02:23.614 common/cpt: not in enabled drivers build config
00:02:23.614 common/dpaax: not in enabled drivers build config
00:02:23.614 common/iavf: not in enabled drivers build config
00:02:23.614 common/idpf: not in enabled drivers build config
00:02:23.614 common/ionic: not in enabled drivers build config
00:02:23.614 common/mvep: not in enabled drivers build config
00:02:23.614 common/octeontx: not in enabled drivers build config
00:02:23.614 bus/auxiliary: not in enabled drivers build config
00:02:23.614 bus/cdx: not in enabled drivers build config
00:02:23.614 bus/dpaa: not in enabled drivers build config
00:02:23.614 bus/fslmc: not in enabled drivers build config
00:02:23.614 bus/ifpga: not in enabled drivers build config
00:02:23.614 bus/platform: not in enabled drivers build config
00:02:23.614 bus/uacce: not in enabled drivers build config
00:02:23.614 bus/vmbus: not in enabled drivers build config
00:02:23.614 common/cnxk: not in enabled drivers build config
00:02:23.614 common/mlx5: not in enabled drivers build config
00:02:23.614 common/nfp: not in enabled drivers build config
00:02:23.614 common/nitrox: not in enabled drivers build config
00:02:23.614 common/qat: not in enabled drivers build config
00:02:23.614 common/sfc_efx: not in enabled drivers build config
00:02:23.614 mempool/bucket: not in enabled drivers build config
00:02:23.614 mempool/cnxk: not in enabled drivers build config
00:02:23.614 mempool/dpaa: not in enabled drivers build config
00:02:23.614 mempool/dpaa2: not in enabled drivers build config
00:02:23.614 mempool/octeontx: not in enabled drivers build config
00:02:23.614 mempool/stack: not in enabled drivers build config
00:02:23.614 dma/cnxk: not in enabled drivers build config
00:02:23.614 dma/dpaa: not in enabled drivers build config
00:02:23.614 dma/dpaa2: not in enabled drivers build config
00:02:23.614 dma/hisilicon: not in enabled drivers build config
00:02:23.614 dma/idxd: not in enabled drivers build config
00:02:23.614 dma/ioat: not in enabled drivers build config
00:02:23.614 dma/skeleton: not in enabled drivers build config
00:02:23.614 net/af_packet: not in enabled drivers build config
00:02:23.614 net/af_xdp: not in enabled drivers build config
00:02:23.614 net/ark: not in enabled drivers build config
00:02:23.614 net/atlantic: not in enabled drivers build config
00:02:23.614 net/avp: not in enabled drivers build config
00:02:23.614 net/axgbe: not in enabled drivers build config
00:02:23.614 net/bnx2x: not in enabled drivers build config
00:02:23.614 net/bnxt: not in enabled drivers build config
00:02:23.614 net/bonding: not in enabled drivers build config
00:02:23.614 net/cnxk: not in enabled drivers build config
00:02:23.614 net/cpfl: not in enabled drivers build config
00:02:23.614 net/cxgbe: not in enabled drivers build config
00:02:23.614 net/dpaa: not in enabled drivers build config
00:02:23.614 net/dpaa2: not in enabled drivers build config
00:02:23.614 net/e1000: not in enabled drivers build config
00:02:23.614 net/ena: not in enabled drivers build config
00:02:23.614 net/enetc: not in enabled drivers build config
00:02:23.614 net/enetfec: not in enabled drivers build config
00:02:23.614 net/enic: not in enabled drivers build config
00:02:23.614 net/failsafe: not in enabled drivers build config
00:02:23.614 net/fm10k: not in enabled drivers build config
00:02:23.614 net/gve: not in enabled drivers build config
00:02:23.614 net/hinic: not in enabled drivers build config
00:02:23.614 net/hns3: not in enabled drivers build config
00:02:23.614 net/i40e: not in enabled drivers build config
00:02:23.614 net/iavf: not in enabled drivers build config
00:02:23.614 net/ice: not in enabled drivers build config
00:02:23.614 net/idpf: not in enabled drivers build config
00:02:23.614 net/igc: not in enabled drivers build config
00:02:23.614 net/ionic: not in enabled drivers build config
00:02:23.614 net/ipn3ke: not in enabled drivers build config
00:02:23.614 net/ixgbe: not in enabled drivers build config
00:02:23.614 net/mana: not in enabled drivers build config
00:02:23.614 net/memif: not in enabled drivers build config
00:02:23.614 net/mlx4: not in enabled drivers build config
00:02:23.614 net/mlx5: not in enabled drivers build config
00:02:23.614 net/mvneta: not in enabled drivers build config
00:02:23.614 net/mvpp2: not in enabled drivers build config
00:02:23.614 net/netvsc: not in enabled drivers build config
00:02:23.614 net/nfb: not in enabled drivers build config
00:02:23.614 net/nfp: not in enabled drivers build config
00:02:23.614 net/ngbe: not in enabled drivers build config
00:02:23.614 net/null: not in enabled drivers build config
00:02:23.614 net/octeontx: not in enabled drivers build config
00:02:23.614 net/octeon_ep: not in enabled drivers build config
00:02:23.614 net/pcap: not in enabled drivers build config
00:02:23.614 net/pfe: not in enabled drivers build config
00:02:23.614 net/qede: not in enabled drivers build config
00:02:23.614 net/ring: not in enabled drivers build config
00:02:23.614 net/sfc: not in enabled drivers build config
00:02:23.614 net/softnic: not in enabled drivers build config
00:02:23.614 net/tap: not in enabled drivers build config
00:02:23.614 net/thunderx: not in enabled drivers build config
00:02:23.614 net/txgbe: not in enabled drivers build config
00:02:23.614 net/vdev_netvsc: not in enabled drivers build config
00:02:23.614 net/vhost: not in enabled drivers build config
00:02:23.614 net/virtio: not in enabled drivers build config
00:02:23.614 net/vmxnet3: not in enabled drivers build config
00:02:23.614 raw/*: missing internal dependency, "rawdev"
00:02:23.614 crypto/armv8: not in enabled drivers build config
00:02:23.614 crypto/bcmfs: not in enabled drivers build config
00:02:23.614 crypto/caam_jr: not in enabled drivers build config
00:02:23.614 crypto/ccp: not in enabled drivers build config
00:02:23.614 crypto/cnxk: not in enabled drivers build config
00:02:23.614 crypto/dpaa_sec: not in enabled drivers build config
00:02:23.614 crypto/dpaa2_sec: not in enabled drivers build config
00:02:23.614 crypto/ipsec_mb: not in enabled drivers build config
00:02:23.614 crypto/mlx5: not in enabled drivers build config
00:02:23.614 crypto/mvsam: not in enabled drivers build config
00:02:23.614 crypto/nitrox: not in enabled drivers build config
00:02:23.614 crypto/null: not in enabled drivers build config
00:02:23.614 crypto/octeontx: not in enabled drivers build config
00:02:23.614 crypto/openssl: not in enabled drivers build config
00:02:23.614 crypto/scheduler: not in enabled drivers build config
00:02:23.614 crypto/uadk: not in enabled drivers build config
00:02:23.614 crypto/virtio: not in enabled drivers build config
00:02:23.614 compress/isal: not in enabled drivers build config
00:02:23.614 compress/mlx5: not in enabled drivers build config
00:02:23.614 compress/nitrox: not in enabled drivers build config
00:02:23.614 compress/octeontx: not in enabled drivers build config
00:02:23.614 compress/zlib: not in enabled drivers build config
00:02:23.614 regex/*: missing internal dependency, "regexdev"
00:02:23.614 ml/*: missing internal dependency, "mldev"
00:02:23.614 vdpa/ifc: not in enabled drivers build config
00:02:23.614 vdpa/mlx5: not in enabled drivers build config
00:02:23.614 vdpa/nfp: not in enabled drivers build config
00:02:23.614 vdpa/sfc: not in enabled drivers build config
00:02:23.614 event/*: missing internal dependency, "eventdev"
00:02:23.614 baseband/*: missing internal dependency, "bbdev"
00:02:23.614 gpu/*: missing internal dependency, "gpudev"
00:02:23.614
00:02:23.614
00:02:23.614 Build targets in project: 84
00:02:23.614
00:02:23.614 DPDK 24.03.0
00:02:23.614
00:02:23.614 User defined options
00:02:23.614 buildtype : debug
00:02:23.614 default_library : shared
00:02:23.614 libdir : lib
00:02:23.614 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:23.614 b_sanitize : address
00:02:23.614 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:23.614 c_link_args :
00:02:23.614 cpu_instruction_set: native
00:02:23.614 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:23.614 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:23.614 enable_docs : false
00:02:23.614 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:23.614 enable_kmods : false
00:02:23.614 max_lcores : 128
00:02:23.614 tests : false
00:02:23.614
00:02:23.614 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:23.876 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:23.876 [1/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:23.876 [2/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:23.876 [3/267] Linking static target lib/librte_log.a
00:02:24.137 [4/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:24.137 [5/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:24.137 [6/267] Linking static target lib/librte_kvargs.a
00:02:24.137 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:24.137 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:24.444 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:24.444 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:24.444 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:24.444 [12/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.444 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:24.444 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:24.444 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:24.444 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:24.702 [17/267] Linking static target lib/librte_telemetry.a
00:02:24.702 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:24.702 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:24.702 [20/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.963 [21/267] Linking target lib/librte_log.so.24.1
00:02:24.963 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:24.963 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:24.963 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:24.963 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:24.963 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:24.963 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:24.963 [28/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:25.222 [29/267] Linking target lib/librte_kvargs.so.24.1
00:02:25.222 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:25.222 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:25.222 [32/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:25.222 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:25.222 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:25.481 [35/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.481 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:25.481 [37/267] Linking target lib/librte_telemetry.so.24.1
00:02:25.481 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:25.481 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:25.481 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:25.481 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:25.740 [42/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:25.740 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:25.740 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:25.740 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:25.740 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:25.740 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:25.740 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:26.001 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:26.001 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:26.001 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:26.001 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:26.262 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:26.262 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:26.262 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:26.262 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:26.262 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:26.262 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:26.262 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:26.262 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:26.522 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:26.522 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:26.522 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:26.522 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:26.522 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:26.522 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:26.522 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:26.781 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:27.040 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:27.040 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:27.040 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:27.040 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:27.040 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:27.040 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:27.040 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:27.040 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:27.040 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:27.040 [78/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:27.040 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:27.040 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:27.040 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:27.299 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:27.299 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:27.299 [84/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:27.558 [85/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:27.558 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:27.558 [87/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:27.558 [88/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:27.558 [89/267] Linking static target lib/librte_eal.a
00:02:27.558 [90/267] Linking static target lib/librte_rcu.a
00:02:27.558 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:27.558 [92/267] Linking static target lib/librte_mempool.a
00:02:27.558 [93/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:27.558 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:27.558 [95/267] Linking static target lib/librte_ring.a
00:02:27.558 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:27.817 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:27.817 [98/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:28.075 [99/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:28.075 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:28.075 [101/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.075 [102/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.075 [103/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:28.334 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:28.334 [105/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:02:28.334 [106/267] Linking static target lib/librte_net.a
00:02:28.593 [107/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:28.593 [108/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:28.593 [109/267] Linking static target lib/librte_mbuf.a
00:02:28.593 [110/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:28.593 [111/267] Linking static target lib/librte_meter.a
00:02:28.593 [112/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.593 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:28.593 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:28.593 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:28.593 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.853 [117/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.113 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:29.113 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:29.113 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:29.374 [121/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.374 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:29.374 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:29.374 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:29.633 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:29.633 [126/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:29.633 [127/267] Linking static target lib/librte_pci.a
00:02:29.633 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:29.633 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:29.634 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:29.634 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:29.634 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:29.634 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:29.895 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:29.895 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:29.895 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:29.895 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:29.895 [138/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.895 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:29.895 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:29.895 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:30.156 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:30.156 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:30.156 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:30.156 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:30.156 [146/267] Linking static target lib/librte_cmdline.a
00:02:30.156 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:30.156 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:30.416 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:30.416 [150/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:30.416 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:30.416 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:30.676 [153/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:30.676 [154/267] Linking static target lib/librte_timer.a
00:02:30.938 [155/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:30.938 [156/267] Linking static target lib/librte_compressdev.a
00:02:30.938 [157/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:30.938 [158/267] Linking static target lib/librte_ethdev.a
00:02:30.938 [159/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:30.938 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:30.938 [161/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:31.199 [162/267] Linking
static target lib/librte_hash.a 00:02:31.199 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:31.199 [164/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:31.199 [165/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.199 [166/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:31.199 [167/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:31.199 [168/267] Linking static target lib/librte_dmadev.a 00:02:31.460 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:31.460 [170/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.460 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:31.723 [172/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.723 [173/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:31.723 [174/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:31.983 [175/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:31.983 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:31.983 [177/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:31.983 [178/267] Linking static target lib/librte_cryptodev.a 00:02:31.983 [179/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:31.983 [180/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.983 [181/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:31.983 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:31.983 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:31.983 [184/267] 
Linking static target lib/librte_power.a 00:02:31.983 [185/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.553 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:32.553 [187/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:32.553 [188/267] Linking static target lib/librte_reorder.a 00:02:32.553 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:32.553 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:32.838 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:32.838 [192/267] Linking static target lib/librte_security.a 00:02:32.838 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:33.100 [194/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.100 [195/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.361 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:33.361 [197/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.620 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:33.620 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:33.620 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:33.620 [201/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:33.880 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:33.880 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:33.880 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:33.880 [205/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:33.880 [206/267] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:33.880 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:33.880 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:33.880 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:34.142 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.142 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:34.142 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:34.142 [213/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:34.142 [214/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:34.142 [215/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:34.142 [216/267] Linking static target drivers/librte_bus_vdev.a 00:02:34.142 [217/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:34.403 [218/267] Linking static target drivers/librte_bus_pci.a 00:02:34.403 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:34.403 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:34.403 [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.664 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:34.664 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:34.664 [224/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:34.664 [225/267] Linking static target drivers/librte_mempool_ring.a 00:02:34.664 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:35.377 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:35.950 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.950 [229/267] Linking target lib/librte_eal.so.24.1 00:02:36.212 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:36.212 [231/267] Linking target lib/librte_ring.so.24.1 00:02:36.212 [232/267] Linking target lib/librte_meter.so.24.1 00:02:36.212 [233/267] Linking target lib/librte_pci.so.24.1 00:02:36.212 [234/267] Linking target lib/librte_timer.so.24.1 00:02:36.212 [235/267] Linking target lib/librte_dmadev.so.24.1 00:02:36.212 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:36.212 [237/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:36.212 [238/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:36.473 [239/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:36.473 [240/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:36.473 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:36.473 [242/267] Linking target lib/librte_mempool.so.24.1 00:02:36.473 [243/267] Linking target lib/librte_rcu.so.24.1 00:02:36.473 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:36.473 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:36.473 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:36.473 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:36.473 [248/267] Linking target lib/librte_mbuf.so.24.1 00:02:36.735 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:36.735 [250/267] Linking target lib/librte_reorder.so.24.1 00:02:36.735 [251/267] Linking target 
lib/librte_net.so.24.1 00:02:36.735 [252/267] Linking target lib/librte_compressdev.so.24.1 00:02:36.735 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:02:36.735 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:36.735 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:36.735 [256/267] Linking target lib/librte_cmdline.so.24.1 00:02:36.735 [257/267] Linking target lib/librte_hash.so.24.1 00:02:36.735 [258/267] Linking target lib/librte_security.so.24.1 00:02:36.996 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:37.256 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.516 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:37.516 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:37.517 [263/267] Linking target lib/librte_power.so.24.1 00:02:38.901 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:38.901 [265/267] Linking static target lib/librte_vhost.a 00:02:40.288 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.288 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:40.288 INFO: autodetecting backend as ninja 00:02:40.288 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:58.409 CC lib/ut_mock/mock.o 00:02:58.409 CC lib/log/log.o 00:02:58.409 CC lib/log/log_flags.o 00:02:58.409 CC lib/log/log_deprecated.o 00:02:58.409 CC lib/ut/ut.o 00:02:58.409 LIB libspdk_ut.a 00:02:58.409 LIB libspdk_ut_mock.a 00:02:58.409 LIB libspdk_log.a 00:02:58.409 SO libspdk_ut_mock.so.6.0 00:02:58.409 SO libspdk_ut.so.2.0 00:02:58.409 SO libspdk_log.so.7.1 00:02:58.409 SYMLINK libspdk_ut.so 00:02:58.409 SYMLINK libspdk_ut_mock.so 00:02:58.409 SYMLINK libspdk_log.so 
00:02:58.409 CC lib/util/base64.o 00:02:58.409 CC lib/util/bit_array.o 00:02:58.409 CC lib/util/cpuset.o 00:02:58.409 CXX lib/trace_parser/trace.o 00:02:58.409 CC lib/util/crc16.o 00:02:58.409 CC lib/util/crc32.o 00:02:58.409 CC lib/util/crc32c.o 00:02:58.409 CC lib/dma/dma.o 00:02:58.409 CC lib/ioat/ioat.o 00:02:58.409 CC lib/vfio_user/host/vfio_user_pci.o 00:02:58.409 CC lib/util/crc64.o 00:02:58.409 CC lib/util/crc32_ieee.o 00:02:58.409 CC lib/util/dif.o 00:02:58.409 CC lib/util/fd.o 00:02:58.409 LIB libspdk_dma.a 00:02:58.409 CC lib/util/fd_group.o 00:02:58.409 SO libspdk_dma.so.5.0 00:02:58.409 CC lib/vfio_user/host/vfio_user.o 00:02:58.409 CC lib/util/file.o 00:02:58.409 CC lib/util/hexlify.o 00:02:58.409 SYMLINK libspdk_dma.so 00:02:58.409 CC lib/util/iov.o 00:02:58.409 LIB libspdk_ioat.a 00:02:58.409 CC lib/util/math.o 00:02:58.409 SO libspdk_ioat.so.7.0 00:02:58.409 CC lib/util/net.o 00:02:58.409 SYMLINK libspdk_ioat.so 00:02:58.409 CC lib/util/pipe.o 00:02:58.409 CC lib/util/strerror_tls.o 00:02:58.409 CC lib/util/string.o 00:02:58.409 CC lib/util/uuid.o 00:02:58.409 CC lib/util/xor.o 00:02:58.409 LIB libspdk_vfio_user.a 00:02:58.409 CC lib/util/zipf.o 00:02:58.409 SO libspdk_vfio_user.so.5.0 00:02:58.409 CC lib/util/md5.o 00:02:58.409 SYMLINK libspdk_vfio_user.so 00:02:58.409 LIB libspdk_util.a 00:02:58.409 SO libspdk_util.so.10.1 00:02:58.409 LIB libspdk_trace_parser.a 00:02:58.409 SYMLINK libspdk_util.so 00:02:58.409 SO libspdk_trace_parser.so.6.0 00:02:58.409 SYMLINK libspdk_trace_parser.so 00:02:58.409 CC lib/rdma_provider/common.o 00:02:58.409 CC lib/json/json_parse.o 00:02:58.409 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:58.409 CC lib/conf/conf.o 00:02:58.409 CC lib/json/json_write.o 00:02:58.409 CC lib/json/json_util.o 00:02:58.409 CC lib/idxd/idxd.o 00:02:58.409 CC lib/vmd/vmd.o 00:02:58.409 CC lib/env_dpdk/env.o 00:02:58.409 CC lib/rdma_utils/rdma_utils.o 00:02:58.409 CC lib/idxd/idxd_user.o 00:02:58.409 LIB libspdk_rdma_provider.a 
00:02:58.409 SO libspdk_rdma_provider.so.6.0 00:02:58.409 LIB libspdk_conf.a 00:02:58.409 CC lib/idxd/idxd_kernel.o 00:02:58.409 CC lib/env_dpdk/memory.o 00:02:58.409 SO libspdk_conf.so.6.0 00:02:58.409 SYMLINK libspdk_rdma_provider.so 00:02:58.409 CC lib/env_dpdk/pci.o 00:02:58.409 LIB libspdk_rdma_utils.a 00:02:58.409 LIB libspdk_json.a 00:02:58.409 SYMLINK libspdk_conf.so 00:02:58.409 SO libspdk_rdma_utils.so.1.0 00:02:58.409 CC lib/vmd/led.o 00:02:58.409 SO libspdk_json.so.6.0 00:02:58.409 SYMLINK libspdk_rdma_utils.so 00:02:58.409 CC lib/env_dpdk/init.o 00:02:58.409 SYMLINK libspdk_json.so 00:02:58.409 CC lib/env_dpdk/threads.o 00:02:58.409 CC lib/env_dpdk/pci_ioat.o 00:02:58.409 CC lib/env_dpdk/pci_virtio.o 00:02:58.409 CC lib/env_dpdk/pci_vmd.o 00:02:58.409 CC lib/env_dpdk/pci_idxd.o 00:02:58.409 CC lib/env_dpdk/pci_event.o 00:02:58.669 CC lib/jsonrpc/jsonrpc_server.o 00:02:58.669 CC lib/env_dpdk/sigbus_handler.o 00:02:58.669 CC lib/env_dpdk/pci_dpdk.o 00:02:58.669 LIB libspdk_idxd.a 00:02:58.669 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:58.669 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:58.669 SO libspdk_idxd.so.12.1 00:02:58.669 LIB libspdk_vmd.a 00:02:58.669 SYMLINK libspdk_idxd.so 00:02:58.669 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:58.669 CC lib/jsonrpc/jsonrpc_client.o 00:02:58.669 SO libspdk_vmd.so.6.0 00:02:58.669 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:58.669 SYMLINK libspdk_vmd.so 00:02:58.930 LIB libspdk_jsonrpc.a 00:02:58.930 SO libspdk_jsonrpc.so.6.0 00:02:58.930 SYMLINK libspdk_jsonrpc.so 00:02:59.190 CC lib/rpc/rpc.o 00:02:59.450 LIB libspdk_env_dpdk.a 00:02:59.450 LIB libspdk_rpc.a 00:02:59.450 SO libspdk_rpc.so.6.0 00:02:59.450 SO libspdk_env_dpdk.so.15.1 00:02:59.450 SYMLINK libspdk_rpc.so 00:02:59.711 SYMLINK libspdk_env_dpdk.so 00:02:59.711 CC lib/trace/trace.o 00:02:59.711 CC lib/trace/trace_rpc.o 00:02:59.711 CC lib/keyring/keyring.o 00:02:59.711 CC lib/trace/trace_flags.o 00:02:59.711 CC lib/keyring/keyring_rpc.o 00:02:59.711 CC 
lib/notify/notify.o 00:02:59.711 CC lib/notify/notify_rpc.o 00:02:59.972 LIB libspdk_notify.a 00:02:59.972 SO libspdk_notify.so.6.0 00:02:59.972 SYMLINK libspdk_notify.so 00:02:59.972 LIB libspdk_keyring.a 00:02:59.972 LIB libspdk_trace.a 00:02:59.972 SO libspdk_keyring.so.2.0 00:02:59.972 SO libspdk_trace.so.11.0 00:02:59.972 SYMLINK libspdk_keyring.so 00:03:00.233 SYMLINK libspdk_trace.so 00:03:00.233 CC lib/thread/thread.o 00:03:00.233 CC lib/thread/iobuf.o 00:03:00.233 CC lib/sock/sock_rpc.o 00:03:00.233 CC lib/sock/sock.o 00:03:00.802 LIB libspdk_sock.a 00:03:00.802 SO libspdk_sock.so.10.0 00:03:00.802 SYMLINK libspdk_sock.so 00:03:01.061 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:01.061 CC lib/nvme/nvme_ctrlr.o 00:03:01.061 CC lib/nvme/nvme_fabric.o 00:03:01.061 CC lib/nvme/nvme_ns_cmd.o 00:03:01.061 CC lib/nvme/nvme_pcie_common.o 00:03:01.061 CC lib/nvme/nvme_ns.o 00:03:01.061 CC lib/nvme/nvme_pcie.o 00:03:01.061 CC lib/nvme/nvme_qpair.o 00:03:01.061 CC lib/nvme/nvme.o 00:03:01.632 CC lib/nvme/nvme_quirks.o 00:03:01.632 CC lib/nvme/nvme_transport.o 00:03:01.892 CC lib/nvme/nvme_discovery.o 00:03:01.892 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:01.892 LIB libspdk_thread.a 00:03:01.892 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:01.892 SO libspdk_thread.so.11.0 00:03:01.892 CC lib/nvme/nvme_tcp.o 00:03:01.892 CC lib/nvme/nvme_opal.o 00:03:01.892 SYMLINK libspdk_thread.so 00:03:01.892 CC lib/nvme/nvme_io_msg.o 00:03:02.152 CC lib/nvme/nvme_poll_group.o 00:03:02.152 CC lib/nvme/nvme_zns.o 00:03:02.413 CC lib/nvme/nvme_stubs.o 00:03:02.413 CC lib/nvme/nvme_auth.o 00:03:02.413 CC lib/nvme/nvme_cuse.o 00:03:02.413 CC lib/nvme/nvme_rdma.o 00:03:02.673 CC lib/accel/accel.o 00:03:02.674 CC lib/accel/accel_rpc.o 00:03:02.674 CC lib/blob/blobstore.o 00:03:02.674 CC lib/blob/request.o 00:03:02.674 CC lib/blob/zeroes.o 00:03:02.933 CC lib/accel/accel_sw.o 00:03:02.933 CC lib/blob/blob_bs_dev.o 00:03:03.192 CC lib/init/json_config.o 00:03:03.192 CC lib/virtio/virtio.o 00:03:03.192 CC 
lib/fsdev/fsdev.o 00:03:03.192 CC lib/virtio/virtio_vhost_user.o 00:03:03.452 CC lib/init/subsystem.o 00:03:03.452 CC lib/init/subsystem_rpc.o 00:03:03.452 CC lib/init/rpc.o 00:03:03.452 CC lib/fsdev/fsdev_io.o 00:03:03.452 CC lib/fsdev/fsdev_rpc.o 00:03:03.452 CC lib/virtio/virtio_vfio_user.o 00:03:03.452 CC lib/virtio/virtio_pci.o 00:03:03.711 LIB libspdk_init.a 00:03:03.711 SO libspdk_init.so.6.0 00:03:03.711 LIB libspdk_accel.a 00:03:03.711 SO libspdk_accel.so.16.0 00:03:03.711 SYMLINK libspdk_init.so 00:03:03.711 SYMLINK libspdk_accel.so 00:03:03.970 LIB libspdk_virtio.a 00:03:03.970 LIB libspdk_nvme.a 00:03:03.970 SO libspdk_virtio.so.7.0 00:03:03.970 CC lib/event/reactor.o 00:03:03.970 CC lib/event/app.o 00:03:03.970 CC lib/event/log_rpc.o 00:03:03.970 CC lib/event/app_rpc.o 00:03:03.970 CC lib/event/scheduler_static.o 00:03:03.970 LIB libspdk_fsdev.a 00:03:03.970 SYMLINK libspdk_virtio.so 00:03:03.970 SO libspdk_fsdev.so.2.0 00:03:03.970 CC lib/bdev/bdev.o 00:03:03.970 CC lib/bdev/bdev_rpc.o 00:03:03.970 SYMLINK libspdk_fsdev.so 00:03:03.970 CC lib/bdev/bdev_zone.o 00:03:03.970 CC lib/bdev/part.o 00:03:03.970 SO libspdk_nvme.so.15.0 00:03:03.970 CC lib/bdev/scsi_nvme.o 00:03:04.232 SYMLINK libspdk_nvme.so 00:03:04.232 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:04.493 LIB libspdk_event.a 00:03:04.493 SO libspdk_event.so.14.0 00:03:04.493 SYMLINK libspdk_event.so 00:03:05.062 LIB libspdk_fuse_dispatcher.a 00:03:05.062 SO libspdk_fuse_dispatcher.so.1.0 00:03:05.062 SYMLINK libspdk_fuse_dispatcher.so 00:03:06.051 LIB libspdk_blob.a 00:03:06.051 SO libspdk_blob.so.11.0 00:03:06.312 SYMLINK libspdk_blob.so 00:03:06.312 CC lib/blobfs/blobfs.o 00:03:06.312 CC lib/lvol/lvol.o 00:03:06.312 CC lib/blobfs/tree.o 00:03:06.882 LIB libspdk_bdev.a 00:03:06.882 SO libspdk_bdev.so.17.0 00:03:07.140 SYMLINK libspdk_bdev.so 00:03:07.140 CC lib/nbd/nbd.o 00:03:07.140 CC lib/nbd/nbd_rpc.o 00:03:07.140 CC lib/scsi/dev.o 00:03:07.140 CC lib/scsi/lun.o 00:03:07.141 CC 
lib/scsi/port.o 00:03:07.141 CC lib/ftl/ftl_core.o 00:03:07.141 CC lib/nvmf/ctrlr.o 00:03:07.141 CC lib/ublk/ublk.o 00:03:07.401 LIB libspdk_blobfs.a 00:03:07.401 CC lib/ublk/ublk_rpc.o 00:03:07.401 SO libspdk_blobfs.so.10.0 00:03:07.401 CC lib/nvmf/ctrlr_discovery.o 00:03:07.401 LIB libspdk_lvol.a 00:03:07.401 SYMLINK libspdk_blobfs.so 00:03:07.401 CC lib/ftl/ftl_init.o 00:03:07.401 SO libspdk_lvol.so.10.0 00:03:07.401 CC lib/ftl/ftl_layout.o 00:03:07.401 SYMLINK libspdk_lvol.so 00:03:07.401 CC lib/scsi/scsi.o 00:03:07.401 CC lib/scsi/scsi_bdev.o 00:03:07.661 LIB libspdk_nbd.a 00:03:07.661 CC lib/ftl/ftl_debug.o 00:03:07.661 CC lib/ftl/ftl_io.o 00:03:07.661 SO libspdk_nbd.so.7.0 00:03:07.661 CC lib/ftl/ftl_sb.o 00:03:07.661 CC lib/ftl/ftl_l2p.o 00:03:07.661 SYMLINK libspdk_nbd.so 00:03:07.661 CC lib/ftl/ftl_l2p_flat.o 00:03:07.922 CC lib/ftl/ftl_nv_cache.o 00:03:07.922 CC lib/ftl/ftl_band.o 00:03:07.922 LIB libspdk_ublk.a 00:03:07.922 CC lib/scsi/scsi_pr.o 00:03:07.922 CC lib/ftl/ftl_band_ops.o 00:03:07.922 SO libspdk_ublk.so.3.0 00:03:07.922 CC lib/ftl/ftl_writer.o 00:03:07.922 CC lib/nvmf/ctrlr_bdev.o 00:03:07.922 CC lib/nvmf/subsystem.o 00:03:07.922 SYMLINK libspdk_ublk.so 00:03:07.922 CC lib/nvmf/nvmf.o 00:03:07.922 CC lib/scsi/scsi_rpc.o 00:03:08.181 CC lib/ftl/ftl_rq.o 00:03:08.181 CC lib/nvmf/nvmf_rpc.o 00:03:08.181 CC lib/scsi/task.o 00:03:08.181 CC lib/ftl/ftl_reloc.o 00:03:08.181 CC lib/ftl/ftl_l2p_cache.o 00:03:08.441 CC lib/ftl/ftl_p2l.o 00:03:08.441 LIB libspdk_scsi.a 00:03:08.441 SO libspdk_scsi.so.9.0 00:03:08.441 SYMLINK libspdk_scsi.so 00:03:08.441 CC lib/ftl/ftl_p2l_log.o 00:03:08.441 CC lib/nvmf/transport.o 00:03:08.700 CC lib/nvmf/tcp.o 00:03:08.700 CC lib/ftl/mngt/ftl_mngt.o 00:03:08.700 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:08.962 CC lib/nvmf/stubs.o 00:03:08.962 CC lib/nvmf/mdns_server.o 00:03:08.962 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:08.962 CC lib/nvmf/rdma.o 00:03:08.962 CC lib/nvmf/auth.o 00:03:08.962 CC lib/iscsi/conn.o 
00:03:09.225 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:09.225 CC lib/vhost/vhost.o 00:03:09.225 CC lib/vhost/vhost_rpc.o 00:03:09.225 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:09.225 CC lib/vhost/vhost_scsi.o 00:03:09.225 CC lib/vhost/vhost_blk.o 00:03:09.225 CC lib/iscsi/init_grp.o 00:03:09.485 CC lib/iscsi/iscsi.o 00:03:09.485 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:09.745 CC lib/iscsi/param.o 00:03:09.745 CC lib/iscsi/portal_grp.o 00:03:09.745 CC lib/vhost/rte_vhost_user.o 00:03:09.745 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:09.745 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:10.005 CC lib/iscsi/tgt_node.o 00:03:10.005 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:10.005 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:10.005 CC lib/iscsi/iscsi_subsystem.o 00:03:10.265 CC lib/iscsi/iscsi_rpc.o 00:03:10.265 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:10.265 CC lib/iscsi/task.o 00:03:10.265 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:10.265 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:10.524 CC lib/ftl/utils/ftl_conf.o 00:03:10.524 CC lib/ftl/utils/ftl_md.o 00:03:10.524 CC lib/ftl/utils/ftl_mempool.o 00:03:10.524 CC lib/ftl/utils/ftl_bitmap.o 00:03:10.524 CC lib/ftl/utils/ftl_property.o 00:03:10.524 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:10.524 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:10.524 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:10.784 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:10.784 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:10.784 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:10.784 LIB libspdk_vhost.a 00:03:10.784 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:10.784 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:10.784 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:10.784 SO libspdk_vhost.so.8.0 00:03:11.045 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:11.045 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:11.045 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:11.045 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:11.045 SYMLINK libspdk_vhost.so 00:03:11.045 CC lib/ftl/base/ftl_base_dev.o 00:03:11.045 LIB libspdk_iscsi.a 00:03:11.045 
CC lib/ftl/base/ftl_base_bdev.o 00:03:11.045 CC lib/ftl/ftl_trace.o 00:03:11.045 SO libspdk_iscsi.so.8.0 00:03:11.379 SYMLINK libspdk_iscsi.so 00:03:11.379 LIB libspdk_ftl.a 00:03:11.379 LIB libspdk_nvmf.a 00:03:11.379 SO libspdk_nvmf.so.20.0 00:03:11.379 SO libspdk_ftl.so.9.0 00:03:11.651 SYMLINK libspdk_nvmf.so 00:03:11.651 SYMLINK libspdk_ftl.so 00:03:11.957 CC module/env_dpdk/env_dpdk_rpc.o 00:03:12.219 CC module/keyring/file/keyring.o 00:03:12.219 CC module/fsdev/aio/fsdev_aio.o 00:03:12.219 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:12.219 CC module/blob/bdev/blob_bdev.o 00:03:12.219 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:12.219 CC module/accel/error/accel_error.o 00:03:12.219 CC module/accel/ioat/accel_ioat.o 00:03:12.219 CC module/scheduler/gscheduler/gscheduler.o 00:03:12.219 CC module/sock/posix/posix.o 00:03:12.219 LIB libspdk_env_dpdk_rpc.a 00:03:12.219 SO libspdk_env_dpdk_rpc.so.6.0 00:03:12.219 SYMLINK libspdk_env_dpdk_rpc.so 00:03:12.219 CC module/keyring/file/keyring_rpc.o 00:03:12.219 CC module/accel/ioat/accel_ioat_rpc.o 00:03:12.219 LIB libspdk_scheduler_dpdk_governor.a 00:03:12.219 LIB libspdk_scheduler_gscheduler.a 00:03:12.219 CC module/accel/error/accel_error_rpc.o 00:03:12.219 SO libspdk_scheduler_gscheduler.so.4.0 00:03:12.219 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:12.219 LIB libspdk_scheduler_dynamic.a 00:03:12.219 SO libspdk_scheduler_dynamic.so.4.0 00:03:12.219 SYMLINK libspdk_scheduler_gscheduler.so 00:03:12.219 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:12.219 LIB libspdk_keyring_file.a 00:03:12.480 SO libspdk_keyring_file.so.2.0 00:03:12.480 SYMLINK libspdk_scheduler_dynamic.so 00:03:12.480 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:12.480 LIB libspdk_accel_ioat.a 00:03:12.480 LIB libspdk_blob_bdev.a 00:03:12.480 SO libspdk_blob_bdev.so.11.0 00:03:12.480 LIB libspdk_accel_error.a 00:03:12.480 SO libspdk_accel_ioat.so.6.0 00:03:12.480 SO libspdk_accel_error.so.2.0 00:03:12.480 SYMLINK 
libspdk_keyring_file.so 00:03:12.480 CC module/fsdev/aio/linux_aio_mgr.o 00:03:12.480 CC module/keyring/linux/keyring.o 00:03:12.480 SYMLINK libspdk_accel_ioat.so 00:03:12.480 CC module/accel/dsa/accel_dsa.o 00:03:12.480 SYMLINK libspdk_blob_bdev.so 00:03:12.480 SYMLINK libspdk_accel_error.so 00:03:12.480 CC module/keyring/linux/keyring_rpc.o 00:03:12.480 CC module/accel/iaa/accel_iaa.o 00:03:12.480 CC module/accel/dsa/accel_dsa_rpc.o 00:03:12.739 LIB libspdk_keyring_linux.a 00:03:12.739 CC module/accel/iaa/accel_iaa_rpc.o 00:03:12.739 SO libspdk_keyring_linux.so.1.0 00:03:12.739 SYMLINK libspdk_keyring_linux.so 00:03:12.739 CC module/bdev/delay/vbdev_delay.o 00:03:12.739 CC module/blobfs/bdev/blobfs_bdev.o 00:03:12.739 LIB libspdk_accel_dsa.a 00:03:12.739 LIB libspdk_accel_iaa.a 00:03:12.739 CC module/bdev/error/vbdev_error.o 00:03:12.739 CC module/bdev/gpt/gpt.o 00:03:12.739 SO libspdk_accel_dsa.so.5.0 00:03:12.739 LIB libspdk_fsdev_aio.a 00:03:12.739 SO libspdk_accel_iaa.so.3.0 00:03:12.739 SO libspdk_fsdev_aio.so.1.0 00:03:12.739 CC module/bdev/lvol/vbdev_lvol.o 00:03:13.000 SYMLINK libspdk_accel_dsa.so 00:03:13.000 CC module/bdev/gpt/vbdev_gpt.o 00:03:13.000 SYMLINK libspdk_accel_iaa.so 00:03:13.000 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:13.000 CC module/bdev/malloc/bdev_malloc.o 00:03:13.000 LIB libspdk_sock_posix.a 00:03:13.000 SYMLINK libspdk_fsdev_aio.so 00:03:13.000 CC module/bdev/error/vbdev_error_rpc.o 00:03:13.000 SO libspdk_sock_posix.so.6.0 00:03:13.000 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:13.000 SYMLINK libspdk_sock_posix.so 00:03:13.000 LIB libspdk_bdev_error.a 00:03:13.000 LIB libspdk_blobfs_bdev.a 00:03:13.000 SO libspdk_bdev_error.so.6.0 00:03:13.260 CC module/bdev/null/bdev_null.o 00:03:13.260 LIB libspdk_bdev_gpt.a 00:03:13.260 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:13.260 SO libspdk_blobfs_bdev.so.6.0 00:03:13.260 SO libspdk_bdev_gpt.so.6.0 00:03:13.260 CC module/bdev/nvme/bdev_nvme.o 00:03:13.260 SYMLINK 
libspdk_bdev_error.so 00:03:13.260 CC module/bdev/passthru/vbdev_passthru.o 00:03:13.260 CC module/bdev/null/bdev_null_rpc.o 00:03:13.260 SYMLINK libspdk_blobfs_bdev.so 00:03:13.260 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:13.260 SYMLINK libspdk_bdev_gpt.so 00:03:13.260 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:13.260 LIB libspdk_bdev_delay.a 00:03:13.260 SO libspdk_bdev_delay.so.6.0 00:03:13.260 CC module/bdev/raid/bdev_raid.o 00:03:13.519 SYMLINK libspdk_bdev_delay.so 00:03:13.519 LIB libspdk_bdev_null.a 00:03:13.519 LIB libspdk_bdev_lvol.a 00:03:13.519 CC module/bdev/split/vbdev_split.o 00:03:13.519 SO libspdk_bdev_null.so.6.0 00:03:13.519 LIB libspdk_bdev_malloc.a 00:03:13.519 SO libspdk_bdev_lvol.so.6.0 00:03:13.519 SO libspdk_bdev_malloc.so.6.0 00:03:13.519 SYMLINK libspdk_bdev_lvol.so 00:03:13.519 LIB libspdk_bdev_passthru.a 00:03:13.519 CC module/bdev/raid/bdev_raid_rpc.o 00:03:13.519 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:13.519 CC module/bdev/aio/bdev_aio.o 00:03:13.519 SYMLINK libspdk_bdev_null.so 00:03:13.519 SYMLINK libspdk_bdev_malloc.so 00:03:13.519 CC module/bdev/raid/bdev_raid_sb.o 00:03:13.519 CC module/bdev/raid/raid0.o 00:03:13.519 SO libspdk_bdev_passthru.so.6.0 00:03:13.519 CC module/bdev/ftl/bdev_ftl.o 00:03:13.519 SYMLINK libspdk_bdev_passthru.so 00:03:13.519 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:13.777 CC module/bdev/split/vbdev_split_rpc.o 00:03:13.777 CC module/bdev/nvme/nvme_rpc.o 00:03:13.777 CC module/bdev/nvme/bdev_mdns_client.o 00:03:13.777 LIB libspdk_bdev_split.a 00:03:13.777 CC module/bdev/aio/bdev_aio_rpc.o 00:03:13.777 SO libspdk_bdev_split.so.6.0 00:03:13.777 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:13.777 CC module/bdev/raid/raid1.o 00:03:13.777 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:13.777 SYMLINK libspdk_bdev_split.so 00:03:14.035 LIB libspdk_bdev_aio.a 00:03:14.035 CC module/bdev/nvme/vbdev_opal.o 00:03:14.035 SO libspdk_bdev_aio.so.6.0 00:03:14.035 LIB libspdk_bdev_zone_block.a 
00:03:14.035 LIB libspdk_bdev_ftl.a 00:03:14.035 CC module/bdev/iscsi/bdev_iscsi.o 00:03:14.035 SO libspdk_bdev_zone_block.so.6.0 00:03:14.035 SO libspdk_bdev_ftl.so.6.0 00:03:14.035 SYMLINK libspdk_bdev_aio.so 00:03:14.035 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:14.035 SYMLINK libspdk_bdev_zone_block.so 00:03:14.035 CC module/bdev/raid/concat.o 00:03:14.035 CC module/bdev/raid/raid5f.o 00:03:14.035 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:14.293 SYMLINK libspdk_bdev_ftl.so 00:03:14.293 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:14.293 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:14.293 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:14.293 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:14.293 LIB libspdk_bdev_iscsi.a 00:03:14.553 SO libspdk_bdev_iscsi.so.6.0 00:03:14.553 SYMLINK libspdk_bdev_iscsi.so 00:03:14.812 LIB libspdk_bdev_raid.a 00:03:14.812 LIB libspdk_bdev_virtio.a 00:03:14.812 SO libspdk_bdev_raid.so.6.0 00:03:14.812 SO libspdk_bdev_virtio.so.6.0 00:03:14.812 SYMLINK libspdk_bdev_raid.so 00:03:14.812 SYMLINK libspdk_bdev_virtio.so 00:03:15.754 LIB libspdk_bdev_nvme.a 00:03:16.015 SO libspdk_bdev_nvme.so.7.1 00:03:16.015 SYMLINK libspdk_bdev_nvme.so 00:03:16.615 CC module/event/subsystems/keyring/keyring.o 00:03:16.615 CC module/event/subsystems/fsdev/fsdev.o 00:03:16.615 CC module/event/subsystems/scheduler/scheduler.o 00:03:16.615 CC module/event/subsystems/vmd/vmd.o 00:03:16.615 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:16.615 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:16.615 CC module/event/subsystems/iobuf/iobuf.o 00:03:16.616 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:16.616 CC module/event/subsystems/sock/sock.o 00:03:16.616 LIB libspdk_event_keyring.a 00:03:16.616 LIB libspdk_event_fsdev.a 00:03:16.616 LIB libspdk_event_scheduler.a 00:03:16.616 LIB libspdk_event_vhost_blk.a 00:03:16.616 SO libspdk_event_keyring.so.1.0 00:03:16.616 LIB libspdk_event_sock.a 00:03:16.616 SO libspdk_event_fsdev.so.1.0 00:03:16.616 SO 
libspdk_event_vhost_blk.so.3.0 00:03:16.616 SO libspdk_event_scheduler.so.4.0 00:03:16.616 LIB libspdk_event_vmd.a 00:03:16.616 SO libspdk_event_sock.so.5.0 00:03:16.616 SYMLINK libspdk_event_keyring.so 00:03:16.616 SYMLINK libspdk_event_vhost_blk.so 00:03:16.616 SO libspdk_event_vmd.so.6.0 00:03:16.616 SYMLINK libspdk_event_fsdev.so 00:03:16.616 LIB libspdk_event_iobuf.a 00:03:16.616 SYMLINK libspdk_event_scheduler.so 00:03:16.616 SO libspdk_event_iobuf.so.3.0 00:03:16.616 SYMLINK libspdk_event_sock.so 00:03:16.616 SYMLINK libspdk_event_vmd.so 00:03:16.616 SYMLINK libspdk_event_iobuf.so 00:03:16.878 CC module/event/subsystems/accel/accel.o 00:03:17.141 LIB libspdk_event_accel.a 00:03:17.141 SO libspdk_event_accel.so.6.0 00:03:17.141 SYMLINK libspdk_event_accel.so 00:03:17.401 CC module/event/subsystems/bdev/bdev.o 00:03:17.660 LIB libspdk_event_bdev.a 00:03:17.660 SO libspdk_event_bdev.so.6.0 00:03:17.660 SYMLINK libspdk_event_bdev.so 00:03:17.919 CC module/event/subsystems/ublk/ublk.o 00:03:17.919 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:17.919 CC module/event/subsystems/scsi/scsi.o 00:03:17.919 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:17.919 CC module/event/subsystems/nbd/nbd.o 00:03:17.919 LIB libspdk_event_nbd.a 00:03:17.919 LIB libspdk_event_ublk.a 00:03:17.919 SO libspdk_event_nbd.so.6.0 00:03:17.919 LIB libspdk_event_scsi.a 00:03:17.919 SO libspdk_event_ublk.so.3.0 00:03:17.919 SO libspdk_event_scsi.so.6.0 00:03:17.919 LIB libspdk_event_nvmf.a 00:03:17.919 SYMLINK libspdk_event_nbd.so 00:03:17.919 SYMLINK libspdk_event_ublk.so 00:03:18.178 SYMLINK libspdk_event_scsi.so 00:03:18.178 SO libspdk_event_nvmf.so.6.0 00:03:18.178 SYMLINK libspdk_event_nvmf.so 00:03:18.178 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:18.178 CC module/event/subsystems/iscsi/iscsi.o 00:03:18.438 LIB libspdk_event_vhost_scsi.a 00:03:18.438 SO libspdk_event_vhost_scsi.so.3.0 00:03:18.438 SYMLINK libspdk_event_vhost_scsi.so 00:03:18.438 LIB 
libspdk_event_iscsi.a 00:03:18.438 SO libspdk_event_iscsi.so.6.0 00:03:18.438 SYMLINK libspdk_event_iscsi.so 00:03:18.700 SO libspdk.so.6.0 00:03:18.700 SYMLINK libspdk.so 00:03:18.963 CC app/spdk_lspci/spdk_lspci.o 00:03:18.963 CC app/trace_record/trace_record.o 00:03:18.963 CXX app/trace/trace.o 00:03:18.963 CC app/iscsi_tgt/iscsi_tgt.o 00:03:18.963 CC app/nvmf_tgt/nvmf_main.o 00:03:18.963 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:18.963 CC examples/util/zipf/zipf.o 00:03:18.963 CC examples/ioat/perf/perf.o 00:03:18.963 CC test/thread/poller_perf/poller_perf.o 00:03:18.963 CC app/spdk_tgt/spdk_tgt.o 00:03:18.963 LINK spdk_lspci 00:03:19.225 LINK zipf 00:03:19.225 LINK interrupt_tgt 00:03:19.225 LINK nvmf_tgt 00:03:19.225 LINK spdk_trace_record 00:03:19.225 LINK poller_perf 00:03:19.225 LINK ioat_perf 00:03:19.225 LINK spdk_tgt 00:03:19.225 LINK iscsi_tgt 00:03:19.225 LINK spdk_trace 00:03:19.225 TEST_HEADER include/spdk/accel.h 00:03:19.225 TEST_HEADER include/spdk/accel_module.h 00:03:19.225 TEST_HEADER include/spdk/assert.h 00:03:19.225 TEST_HEADER include/spdk/barrier.h 00:03:19.225 TEST_HEADER include/spdk/base64.h 00:03:19.225 TEST_HEADER include/spdk/bdev.h 00:03:19.225 TEST_HEADER include/spdk/bdev_module.h 00:03:19.225 TEST_HEADER include/spdk/bdev_zone.h 00:03:19.225 TEST_HEADER include/spdk/bit_array.h 00:03:19.487 TEST_HEADER include/spdk/bit_pool.h 00:03:19.487 TEST_HEADER include/spdk/blob_bdev.h 00:03:19.487 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:19.487 TEST_HEADER include/spdk/blobfs.h 00:03:19.487 TEST_HEADER include/spdk/blob.h 00:03:19.487 TEST_HEADER include/spdk/conf.h 00:03:19.487 CC test/rpc_client/rpc_client_test.o 00:03:19.487 TEST_HEADER include/spdk/config.h 00:03:19.487 TEST_HEADER include/spdk/cpuset.h 00:03:19.487 TEST_HEADER include/spdk/crc16.h 00:03:19.487 TEST_HEADER include/spdk/crc32.h 00:03:19.487 TEST_HEADER include/spdk/crc64.h 00:03:19.487 TEST_HEADER include/spdk/dif.h 00:03:19.487 TEST_HEADER include/spdk/dma.h 
00:03:19.487 TEST_HEADER include/spdk/endian.h 00:03:19.487 CC examples/ioat/verify/verify.o 00:03:19.487 TEST_HEADER include/spdk/env_dpdk.h 00:03:19.487 TEST_HEADER include/spdk/env.h 00:03:19.487 TEST_HEADER include/spdk/event.h 00:03:19.487 TEST_HEADER include/spdk/fd_group.h 00:03:19.487 TEST_HEADER include/spdk/fd.h 00:03:19.487 TEST_HEADER include/spdk/file.h 00:03:19.487 TEST_HEADER include/spdk/fsdev.h 00:03:19.487 TEST_HEADER include/spdk/fsdev_module.h 00:03:19.487 TEST_HEADER include/spdk/ftl.h 00:03:19.487 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:19.487 TEST_HEADER include/spdk/gpt_spec.h 00:03:19.487 TEST_HEADER include/spdk/hexlify.h 00:03:19.487 TEST_HEADER include/spdk/histogram_data.h 00:03:19.487 TEST_HEADER include/spdk/idxd.h 00:03:19.487 TEST_HEADER include/spdk/idxd_spec.h 00:03:19.487 TEST_HEADER include/spdk/init.h 00:03:19.487 TEST_HEADER include/spdk/ioat.h 00:03:19.487 CC test/dma/test_dma/test_dma.o 00:03:19.487 TEST_HEADER include/spdk/ioat_spec.h 00:03:19.487 TEST_HEADER include/spdk/iscsi_spec.h 00:03:19.487 TEST_HEADER include/spdk/json.h 00:03:19.487 TEST_HEADER include/spdk/jsonrpc.h 00:03:19.487 TEST_HEADER include/spdk/keyring.h 00:03:19.487 TEST_HEADER include/spdk/keyring_module.h 00:03:19.487 TEST_HEADER include/spdk/likely.h 00:03:19.487 TEST_HEADER include/spdk/log.h 00:03:19.487 TEST_HEADER include/spdk/lvol.h 00:03:19.487 TEST_HEADER include/spdk/md5.h 00:03:19.487 TEST_HEADER include/spdk/memory.h 00:03:19.487 TEST_HEADER include/spdk/mmio.h 00:03:19.487 TEST_HEADER include/spdk/nbd.h 00:03:19.487 TEST_HEADER include/spdk/net.h 00:03:19.487 TEST_HEADER include/spdk/notify.h 00:03:19.487 TEST_HEADER include/spdk/nvme.h 00:03:19.487 TEST_HEADER include/spdk/nvme_intel.h 00:03:19.487 CC app/spdk_nvme_perf/perf.o 00:03:19.487 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:19.487 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:19.487 TEST_HEADER include/spdk/nvme_spec.h 00:03:19.487 TEST_HEADER include/spdk/nvme_zns.h 
00:03:19.487 CC test/app/bdev_svc/bdev_svc.o 00:03:19.487 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:19.487 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:19.487 TEST_HEADER include/spdk/nvmf.h 00:03:19.487 TEST_HEADER include/spdk/nvmf_spec.h 00:03:19.487 TEST_HEADER include/spdk/nvmf_transport.h 00:03:19.487 TEST_HEADER include/spdk/opal.h 00:03:19.487 TEST_HEADER include/spdk/opal_spec.h 00:03:19.487 TEST_HEADER include/spdk/pci_ids.h 00:03:19.487 CC test/event/event_perf/event_perf.o 00:03:19.487 TEST_HEADER include/spdk/pipe.h 00:03:19.487 TEST_HEADER include/spdk/queue.h 00:03:19.487 TEST_HEADER include/spdk/reduce.h 00:03:19.487 TEST_HEADER include/spdk/rpc.h 00:03:19.487 TEST_HEADER include/spdk/scheduler.h 00:03:19.487 TEST_HEADER include/spdk/scsi.h 00:03:19.487 TEST_HEADER include/spdk/scsi_spec.h 00:03:19.487 TEST_HEADER include/spdk/sock.h 00:03:19.487 TEST_HEADER include/spdk/stdinc.h 00:03:19.487 TEST_HEADER include/spdk/string.h 00:03:19.487 CC test/env/vtophys/vtophys.o 00:03:19.487 CC test/event/reactor/reactor.o 00:03:19.487 TEST_HEADER include/spdk/thread.h 00:03:19.487 TEST_HEADER include/spdk/trace.h 00:03:19.487 TEST_HEADER include/spdk/trace_parser.h 00:03:19.487 TEST_HEADER include/spdk/tree.h 00:03:19.487 CC test/env/mem_callbacks/mem_callbacks.o 00:03:19.487 TEST_HEADER include/spdk/ublk.h 00:03:19.487 TEST_HEADER include/spdk/util.h 00:03:19.487 TEST_HEADER include/spdk/uuid.h 00:03:19.487 TEST_HEADER include/spdk/version.h 00:03:19.487 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:19.487 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:19.487 TEST_HEADER include/spdk/vhost.h 00:03:19.487 TEST_HEADER include/spdk/vmd.h 00:03:19.487 TEST_HEADER include/spdk/xor.h 00:03:19.487 TEST_HEADER include/spdk/zipf.h 00:03:19.487 CXX test/cpp_headers/accel.o 00:03:19.487 LINK verify 00:03:19.749 LINK rpc_client_test 00:03:19.749 LINK bdev_svc 00:03:19.749 LINK vtophys 00:03:19.749 LINK reactor 00:03:19.749 LINK event_perf 00:03:19.749 CXX 
test/cpp_headers/accel_module.o 00:03:19.750 CXX test/cpp_headers/assert.o 00:03:19.750 CXX test/cpp_headers/barrier.o 00:03:19.750 CC test/event/reactor_perf/reactor_perf.o 00:03:20.010 CC test/app/histogram_perf/histogram_perf.o 00:03:20.010 CXX test/cpp_headers/base64.o 00:03:20.010 LINK test_dma 00:03:20.010 CC examples/thread/thread/thread_ex.o 00:03:20.010 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:20.010 CC test/app/jsoncat/jsoncat.o 00:03:20.010 LINK reactor_perf 00:03:20.010 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:20.010 LINK mem_callbacks 00:03:20.010 LINK histogram_perf 00:03:20.010 CXX test/cpp_headers/bdev.o 00:03:20.010 LINK jsoncat 00:03:20.269 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:20.269 LINK thread 00:03:20.269 CC test/event/app_repeat/app_repeat.o 00:03:20.269 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:20.269 CXX test/cpp_headers/bdev_module.o 00:03:20.269 CC test/event/scheduler/scheduler.o 00:03:20.269 CC test/env/memory/memory_ut.o 00:03:20.269 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:20.269 LINK spdk_nvme_perf 00:03:20.269 LINK app_repeat 00:03:20.269 LINK nvme_fuzz 00:03:20.269 LINK env_dpdk_post_init 00:03:20.528 CXX test/cpp_headers/bdev_zone.o 00:03:20.528 CXX test/cpp_headers/bit_array.o 00:03:20.528 LINK scheduler 00:03:20.528 CXX test/cpp_headers/bit_pool.o 00:03:20.528 CC examples/sock/hello_world/hello_sock.o 00:03:20.528 CC app/spdk_nvme_identify/identify.o 00:03:20.528 CXX test/cpp_headers/blob_bdev.o 00:03:20.528 CXX test/cpp_headers/blobfs_bdev.o 00:03:20.787 CC examples/vmd/lsvmd/lsvmd.o 00:03:20.787 CC examples/vmd/led/led.o 00:03:20.787 CC test/app/stub/stub.o 00:03:20.787 LINK vhost_fuzz 00:03:20.787 LINK hello_sock 00:03:20.787 LINK lsvmd 00:03:20.787 LINK led 00:03:20.787 CXX test/cpp_headers/blobfs.o 00:03:20.787 CXX test/cpp_headers/blob.o 00:03:20.787 CXX test/cpp_headers/conf.o 00:03:20.787 LINK stub 00:03:21.047 CC examples/idxd/perf/perf.o 00:03:21.047 CXX 
test/cpp_headers/config.o 00:03:21.047 CXX test/cpp_headers/cpuset.o 00:03:21.047 CC app/spdk_nvme_discover/discovery_aer.o 00:03:21.047 CC test/accel/dif/dif.o 00:03:21.047 CC test/nvme/aer/aer.o 00:03:21.047 CC test/blobfs/mkfs/mkfs.o 00:03:21.359 CC test/lvol/esnap/esnap.o 00:03:21.359 CXX test/cpp_headers/crc16.o 00:03:21.359 LINK spdk_nvme_discover 00:03:21.359 LINK idxd_perf 00:03:21.359 LINK mkfs 00:03:21.359 CXX test/cpp_headers/crc32.o 00:03:21.359 LINK spdk_nvme_identify 00:03:21.359 LINK memory_ut 00:03:21.359 LINK aer 00:03:21.618 CXX test/cpp_headers/crc64.o 00:03:21.618 CC test/nvme/reset/reset.o 00:03:21.618 CXX test/cpp_headers/dif.o 00:03:21.618 CXX test/cpp_headers/dma.o 00:03:21.618 CC app/spdk_top/spdk_top.o 00:03:21.618 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:21.618 CC test/env/pci/pci_ut.o 00:03:21.618 CC test/nvme/sgl/sgl.o 00:03:21.618 CC test/nvme/e2edp/nvme_dp.o 00:03:21.878 CXX test/cpp_headers/endian.o 00:03:21.878 LINK iscsi_fuzz 00:03:21.878 LINK reset 00:03:21.878 LINK dif 00:03:21.878 CXX test/cpp_headers/env_dpdk.o 00:03:21.878 LINK hello_fsdev 00:03:21.878 CXX test/cpp_headers/env.o 00:03:21.878 CXX test/cpp_headers/event.o 00:03:21.878 LINK nvme_dp 00:03:22.137 LINK sgl 00:03:22.137 CXX test/cpp_headers/fd_group.o 00:03:22.137 CXX test/cpp_headers/fd.o 00:03:22.137 LINK pci_ut 00:03:22.137 CC app/vhost/vhost.o 00:03:22.137 CXX test/cpp_headers/file.o 00:03:22.137 CC examples/accel/perf/accel_perf.o 00:03:22.137 CC test/nvme/overhead/overhead.o 00:03:22.137 CC test/bdev/bdevio/bdevio.o 00:03:22.397 CC app/spdk_dd/spdk_dd.o 00:03:22.397 CXX test/cpp_headers/fsdev.o 00:03:22.397 CC app/fio/nvme/fio_plugin.o 00:03:22.397 LINK vhost 00:03:22.397 CXX test/cpp_headers/fsdev_module.o 00:03:22.397 CC app/fio/bdev/fio_plugin.o 00:03:22.659 LINK overhead 00:03:22.659 CXX test/cpp_headers/ftl.o 00:03:22.659 LINK spdk_top 00:03:22.659 LINK spdk_dd 00:03:22.659 LINK bdevio 00:03:22.659 CXX test/cpp_headers/fuse_dispatcher.o 
00:03:22.659 CC test/nvme/err_injection/err_injection.o 00:03:22.920 LINK accel_perf 00:03:22.920 CXX test/cpp_headers/gpt_spec.o 00:03:22.920 CC examples/nvme/hello_world/hello_world.o 00:03:22.920 CC test/nvme/startup/startup.o 00:03:22.920 CC examples/blob/hello_world/hello_blob.o 00:03:22.920 LINK spdk_nvme 00:03:22.920 CC test/nvme/reserve/reserve.o 00:03:22.920 CXX test/cpp_headers/hexlify.o 00:03:22.920 LINK err_injection 00:03:22.920 CXX test/cpp_headers/histogram_data.o 00:03:22.920 LINK spdk_bdev 00:03:23.180 LINK startup 00:03:23.181 CXX test/cpp_headers/idxd.o 00:03:23.181 LINK hello_world 00:03:23.181 CXX test/cpp_headers/idxd_spec.o 00:03:23.181 CC examples/blob/cli/blobcli.o 00:03:23.181 LINK hello_blob 00:03:23.181 LINK reserve 00:03:23.181 CC examples/nvme/reconnect/reconnect.o 00:03:23.181 CC test/nvme/simple_copy/simple_copy.o 00:03:23.181 CXX test/cpp_headers/init.o 00:03:23.181 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:23.441 CC test/nvme/connect_stress/connect_stress.o 00:03:23.441 CC test/nvme/boot_partition/boot_partition.o 00:03:23.441 CC test/nvme/compliance/nvme_compliance.o 00:03:23.441 CC examples/nvme/arbitration/arbitration.o 00:03:23.441 CXX test/cpp_headers/ioat.o 00:03:23.441 LINK simple_copy 00:03:23.441 LINK boot_partition 00:03:23.441 LINK connect_stress 00:03:23.441 CXX test/cpp_headers/ioat_spec.o 00:03:23.441 LINK reconnect 00:03:23.701 CXX test/cpp_headers/iscsi_spec.o 00:03:23.701 LINK blobcli 00:03:23.701 CXX test/cpp_headers/json.o 00:03:23.701 CXX test/cpp_headers/jsonrpc.o 00:03:23.701 LINK arbitration 00:03:23.701 LINK nvme_compliance 00:03:23.701 CC examples/nvme/hotplug/hotplug.o 00:03:23.701 CC test/nvme/fused_ordering/fused_ordering.o 00:03:23.701 LINK nvme_manage 00:03:23.701 CXX test/cpp_headers/keyring.o 00:03:23.960 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:23.960 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:23.960 CC examples/bdev/hello_world/hello_bdev.o 00:03:23.960 CC examples/nvme/abort/abort.o 
00:03:23.960 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:23.960 CXX test/cpp_headers/keyring_module.o 00:03:23.960 LINK hotplug 00:03:23.960 LINK fused_ordering 00:03:23.960 LINK cmb_copy 00:03:23.960 LINK doorbell_aers 00:03:23.960 CC examples/bdev/bdevperf/bdevperf.o 00:03:24.220 CXX test/cpp_headers/likely.o 00:03:24.220 LINK pmr_persistence 00:03:24.220 CXX test/cpp_headers/log.o 00:03:24.220 LINK hello_bdev 00:03:24.220 CC test/nvme/fdp/fdp.o 00:03:24.220 CXX test/cpp_headers/lvol.o 00:03:24.220 CC test/nvme/cuse/cuse.o 00:03:24.220 CXX test/cpp_headers/md5.o 00:03:24.220 CXX test/cpp_headers/memory.o 00:03:24.220 CXX test/cpp_headers/mmio.o 00:03:24.220 LINK abort 00:03:24.220 CXX test/cpp_headers/nbd.o 00:03:24.220 CXX test/cpp_headers/net.o 00:03:24.220 CXX test/cpp_headers/notify.o 00:03:24.480 CXX test/cpp_headers/nvme.o 00:03:24.480 CXX test/cpp_headers/nvme_intel.o 00:03:24.480 CXX test/cpp_headers/nvme_ocssd.o 00:03:24.480 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:24.480 CXX test/cpp_headers/nvme_spec.o 00:03:24.480 CXX test/cpp_headers/nvme_zns.o 00:03:24.480 LINK fdp 00:03:24.480 CXX test/cpp_headers/nvmf_cmd.o 00:03:24.480 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:24.480 CXX test/cpp_headers/nvmf.o 00:03:24.741 CXX test/cpp_headers/nvmf_spec.o 00:03:24.741 CXX test/cpp_headers/nvmf_transport.o 00:03:24.741 CXX test/cpp_headers/opal.o 00:03:24.741 CXX test/cpp_headers/opal_spec.o 00:03:24.741 CXX test/cpp_headers/pci_ids.o 00:03:24.741 CXX test/cpp_headers/pipe.o 00:03:24.741 CXX test/cpp_headers/queue.o 00:03:24.741 CXX test/cpp_headers/reduce.o 00:03:24.741 CXX test/cpp_headers/rpc.o 00:03:24.741 CXX test/cpp_headers/scheduler.o 00:03:24.741 CXX test/cpp_headers/scsi.o 00:03:24.741 CXX test/cpp_headers/scsi_spec.o 00:03:24.741 CXX test/cpp_headers/sock.o 00:03:24.741 CXX test/cpp_headers/stdinc.o 00:03:25.001 CXX test/cpp_headers/string.o 00:03:25.001 LINK bdevperf 00:03:25.001 CXX test/cpp_headers/thread.o 00:03:25.001 CXX 
test/cpp_headers/trace.o 00:03:25.001 CXX test/cpp_headers/trace_parser.o 00:03:25.001 CXX test/cpp_headers/tree.o 00:03:25.001 CXX test/cpp_headers/ublk.o 00:03:25.001 CXX test/cpp_headers/util.o 00:03:25.001 CXX test/cpp_headers/uuid.o 00:03:25.001 CXX test/cpp_headers/version.o 00:03:25.001 CXX test/cpp_headers/vfio_user_pci.o 00:03:25.001 CXX test/cpp_headers/vfio_user_spec.o 00:03:25.001 CXX test/cpp_headers/vhost.o 00:03:25.001 CXX test/cpp_headers/vmd.o 00:03:25.262 CXX test/cpp_headers/xor.o 00:03:25.262 CXX test/cpp_headers/zipf.o 00:03:25.262 CC examples/nvmf/nvmf/nvmf.o 00:03:25.665 LINK cuse 00:03:25.665 LINK nvmf 00:03:27.054 LINK esnap 00:03:27.316 00:03:27.316 real 1m15.176s 00:03:27.316 user 7m0.420s 00:03:27.316 sys 1m12.286s 00:03:27.316 09:38:05 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:27.316 09:38:05 make -- common/autotest_common.sh@10 -- $ set +x 00:03:27.316 ************************************ 00:03:27.316 END TEST make 00:03:27.316 ************************************ 00:03:27.316 09:38:05 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:27.316 09:38:05 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:27.316 09:38:05 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:27.316 09:38:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.316 09:38:05 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:27.316 09:38:05 -- pm/common@44 -- $ pid=5045 00:03:27.316 09:38:05 -- pm/common@50 -- $ kill -TERM 5045 00:03:27.316 09:38:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.316 09:38:05 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:27.316 09:38:05 -- pm/common@44 -- $ pid=5046 00:03:27.316 09:38:05 -- pm/common@50 -- $ kill -TERM 5046 00:03:27.316 09:38:05 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 
00:03:27.316 09:38:05 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:27.577 09:38:05 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:27.577 09:38:05 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:27.577 09:38:05 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:27.577 09:38:06 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:27.577 09:38:06 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:27.577 09:38:06 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:27.577 09:38:06 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:27.577 09:38:06 -- scripts/common.sh@336 -- # IFS=.-: 00:03:27.577 09:38:06 -- scripts/common.sh@336 -- # read -ra ver1 00:03:27.577 09:38:06 -- scripts/common.sh@337 -- # IFS=.-: 00:03:27.577 09:38:06 -- scripts/common.sh@337 -- # read -ra ver2 00:03:27.577 09:38:06 -- scripts/common.sh@338 -- # local 'op=<' 00:03:27.577 09:38:06 -- scripts/common.sh@340 -- # ver1_l=2 00:03:27.577 09:38:06 -- scripts/common.sh@341 -- # ver2_l=1 00:03:27.577 09:38:06 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:27.577 09:38:06 -- scripts/common.sh@344 -- # case "$op" in 00:03:27.577 09:38:06 -- scripts/common.sh@345 -- # : 1 00:03:27.577 09:38:06 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:27.577 09:38:06 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:27.577 09:38:06 -- scripts/common.sh@365 -- # decimal 1 00:03:27.577 09:38:06 -- scripts/common.sh@353 -- # local d=1 00:03:27.577 09:38:06 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:27.577 09:38:06 -- scripts/common.sh@355 -- # echo 1 00:03:27.577 09:38:06 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:27.577 09:38:06 -- scripts/common.sh@366 -- # decimal 2 00:03:27.577 09:38:06 -- scripts/common.sh@353 -- # local d=2 00:03:27.577 09:38:06 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:27.577 09:38:06 -- scripts/common.sh@355 -- # echo 2 00:03:27.577 09:38:06 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:27.577 09:38:06 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:27.577 09:38:06 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:27.577 09:38:06 -- scripts/common.sh@368 -- # return 0 00:03:27.577 09:38:06 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:27.577 09:38:06 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:27.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.577 --rc genhtml_branch_coverage=1 00:03:27.577 --rc genhtml_function_coverage=1 00:03:27.577 --rc genhtml_legend=1 00:03:27.577 --rc geninfo_all_blocks=1 00:03:27.577 --rc geninfo_unexecuted_blocks=1 00:03:27.577 00:03:27.577 ' 00:03:27.577 09:38:06 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:27.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.577 --rc genhtml_branch_coverage=1 00:03:27.577 --rc genhtml_function_coverage=1 00:03:27.577 --rc genhtml_legend=1 00:03:27.577 --rc geninfo_all_blocks=1 00:03:27.577 --rc geninfo_unexecuted_blocks=1 00:03:27.577 00:03:27.577 ' 00:03:27.577 09:38:06 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:27.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.577 --rc genhtml_branch_coverage=1 00:03:27.577 --rc 
genhtml_function_coverage=1 00:03:27.577 --rc genhtml_legend=1 00:03:27.577 --rc geninfo_all_blocks=1 00:03:27.577 --rc geninfo_unexecuted_blocks=1 00:03:27.577 00:03:27.577 ' 00:03:27.577 09:38:06 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:27.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.577 --rc genhtml_branch_coverage=1 00:03:27.577 --rc genhtml_function_coverage=1 00:03:27.577 --rc genhtml_legend=1 00:03:27.577 --rc geninfo_all_blocks=1 00:03:27.577 --rc geninfo_unexecuted_blocks=1 00:03:27.577 00:03:27.577 ' 00:03:27.577 09:38:06 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:27.577 09:38:06 -- nvmf/common.sh@7 -- # uname -s 00:03:27.577 09:38:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:27.577 09:38:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:27.577 09:38:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:27.577 09:38:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:27.577 09:38:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:27.577 09:38:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:27.577 09:38:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:27.577 09:38:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:27.577 09:38:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:27.577 09:38:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:27.577 09:38:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88ccc72b-a20f-4a89-a160-d5a9e382087b 00:03:27.577 09:38:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=88ccc72b-a20f-4a89-a160-d5a9e382087b 00:03:27.577 09:38:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:27.577 09:38:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:27.577 09:38:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:27.577 09:38:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:27.577 09:38:06 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:27.577 09:38:06 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:27.577 09:38:06 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:27.577 09:38:06 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:27.577 09:38:06 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:27.577 09:38:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.577 09:38:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.577 09:38:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.577 09:38:06 -- paths/export.sh@5 -- # export PATH 00:03:27.577 09:38:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.577 09:38:06 -- nvmf/common.sh@51 -- # : 0 00:03:27.577 09:38:06 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:27.577 09:38:06 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:27.577 09:38:06 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:27.577 09:38:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:27.577 09:38:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:27.577 09:38:06 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:27.577 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:27.578 09:38:06 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:27.578 09:38:06 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:27.578 09:38:06 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:27.578 09:38:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:27.578 09:38:06 -- spdk/autotest.sh@32 -- # uname -s 00:03:27.578 09:38:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:27.578 09:38:06 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:27.578 09:38:06 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:27.578 09:38:06 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:27.578 09:38:06 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:27.578 09:38:06 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:27.578 09:38:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:27.578 09:38:06 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:27.578 09:38:06 -- spdk/autotest.sh@48 -- # udevadm_pid=53819 00:03:27.578 09:38:06 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:27.578 09:38:06 -- pm/common@17 -- # local monitor 00:03:27.578 09:38:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.578 09:38:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.578 09:38:06 -- pm/common@25 -- # sleep 1 00:03:27.578 09:38:06 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:27.578 09:38:06 -- pm/common@21 -- # date +%s 00:03:27.578 09:38:06 -- 
pm/common@21 -- # date +%s 00:03:27.578 09:38:06 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730281086 00:03:27.578 09:38:06 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730281086 00:03:27.578 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730281086_collect-vmstat.pm.log 00:03:27.578 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730281086_collect-cpu-load.pm.log 00:03:28.612 09:38:07 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:28.612 09:38:07 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:28.612 09:38:07 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:28.612 09:38:07 -- common/autotest_common.sh@10 -- # set +x 00:03:28.612 09:38:07 -- spdk/autotest.sh@59 -- # create_test_list 00:03:28.612 09:38:07 -- common/autotest_common.sh@750 -- # xtrace_disable 00:03:28.612 09:38:07 -- common/autotest_common.sh@10 -- # set +x 00:03:28.612 09:38:07 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:28.612 09:38:07 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:28.612 09:38:07 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:28.612 09:38:07 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:28.612 09:38:07 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:28.612 09:38:07 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:28.612 09:38:07 -- common/autotest_common.sh@1455 -- # uname 00:03:28.612 09:38:07 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:28.612 09:38:07 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:28.612 09:38:07 -- common/autotest_common.sh@1475 -- 
# uname 00:03:28.612 09:38:07 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:28.612 09:38:07 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:28.612 09:38:07 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:28.873 lcov: LCOV version 1.15 00:03:28.873 09:38:07 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:43.786 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:43.786 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:01.949 09:38:37 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:01.949 09:38:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:01.949 09:38:37 -- common/autotest_common.sh@10 -- # set +x 00:04:01.949 09:38:37 -- spdk/autotest.sh@78 -- # rm -f 00:04:01.949 09:38:37 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:01.949 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.949 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:01.949 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:01.949 09:38:38 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:01.949 09:38:38 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:01.949 09:38:38 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:01.949 09:38:38 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:01.949 
09:38:38 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:01.949 09:38:38 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:01.949 09:38:38 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:01.949 09:38:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:01.949 09:38:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:01.949 09:38:38 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:01.949 09:38:38 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:01.949 09:38:38 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:01.949 09:38:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:01.949 09:38:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:01.949 09:38:38 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:01.949 09:38:38 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:04:01.949 09:38:38 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:04:01.949 09:38:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:01.949 09:38:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:01.949 09:38:38 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:01.949 09:38:38 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:04:01.949 09:38:38 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:04:01.949 09:38:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:01.949 09:38:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:01.949 09:38:38 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:01.949 09:38:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:01.949 09:38:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:01.949 09:38:38 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:04:01.949 09:38:38 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:01.949 09:38:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:01.949 No valid GPT data, bailing 00:04:01.949 09:38:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:01.949 09:38:38 -- scripts/common.sh@394 -- # pt= 00:04:01.949 09:38:38 -- scripts/common.sh@395 -- # return 1 00:04:01.949 09:38:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:01.949 1+0 records in 00:04:01.949 1+0 records out 00:04:01.949 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00510437 s, 205 MB/s 00:04:01.949 09:38:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:01.949 09:38:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:01.949 09:38:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:01.949 09:38:38 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:01.949 09:38:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:01.949 No valid GPT data, bailing 00:04:01.949 09:38:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:01.949 09:38:38 -- scripts/common.sh@394 -- # pt= 00:04:01.949 09:38:38 -- scripts/common.sh@395 -- # return 1 00:04:01.949 09:38:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:01.949 1+0 records in 00:04:01.949 1+0 records out 00:04:01.949 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0064546 s, 162 MB/s 00:04:01.949 09:38:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:01.949 09:38:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:01.949 09:38:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:01.949 09:38:38 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:01.949 09:38:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:04:01.949 No valid GPT data, bailing 00:04:01.949 09:38:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:01.949 09:38:38 -- scripts/common.sh@394 -- # pt= 00:04:01.949 09:38:38 -- scripts/common.sh@395 -- # return 1 00:04:01.949 09:38:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:01.949 1+0 records in 00:04:01.949 1+0 records out 00:04:01.949 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00626525 s, 167 MB/s 00:04:01.949 09:38:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:01.949 09:38:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:01.949 09:38:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:01.949 09:38:38 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:01.949 09:38:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:01.949 No valid GPT data, bailing 00:04:01.949 09:38:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:01.949 09:38:38 -- scripts/common.sh@394 -- # pt= 00:04:01.949 09:38:38 -- scripts/common.sh@395 -- # return 1 00:04:01.949 09:38:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:01.949 1+0 records in 00:04:01.949 1+0 records out 00:04:01.949 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00541823 s, 194 MB/s 00:04:01.949 09:38:38 -- spdk/autotest.sh@105 -- # sync 00:04:01.949 09:38:38 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:01.949 09:38:38 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:01.949 09:38:38 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:01.949 09:38:40 -- spdk/autotest.sh@111 -- # uname -s 00:04:01.949 09:38:40 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:01.949 09:38:40 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:01.949 09:38:40 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:04:02.210 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.210 Hugepages 00:04:02.210 node hugesize free / total 00:04:02.210 node0 1048576kB 0 / 0 00:04:02.210 node0 2048kB 0 / 0 00:04:02.210 00:04:02.210 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:02.210 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:02.472 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:02.472 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:02.472 09:38:40 -- spdk/autotest.sh@117 -- # uname -s 00:04:02.472 09:38:40 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:02.472 09:38:40 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:02.472 09:38:40 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.044 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.044 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.304 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.304 09:38:41 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:04.242 09:38:42 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:04.242 09:38:42 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:04.242 09:38:42 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:04.242 09:38:42 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:04.242 09:38:42 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:04.242 09:38:42 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:04.242 09:38:42 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:04.242 09:38:42 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:04.243 09:38:42 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:04.243 09:38:42 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:04.243 09:38:42 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:04.243 09:38:42 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:04.504 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.504 Waiting for block devices as requested 00:04:04.764 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:04.764 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:04.764 09:38:43 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:04.764 09:38:43 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:04.764 09:38:43 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:04.764 09:38:43 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:04.764 09:38:43 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:04.764 09:38:43 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:04.764 09:38:43 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:04.764 09:38:43 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:04.764 09:38:43 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:04.764 09:38:43 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:04.764 09:38:43 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:04.764 09:38:43 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:04.764 09:38:43 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:04.764 09:38:43 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:04.764 09:38:43 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:04.764 09:38:43 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:04:04.764 09:38:43 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:04.764 09:38:43 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:04.764 09:38:43 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:04.764 09:38:43 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:04.764 09:38:43 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:04.764 09:38:43 -- common/autotest_common.sh@1541 -- # continue 00:04:04.765 09:38:43 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:04.765 09:38:43 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:04.765 09:38:43 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:04.765 09:38:43 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:04.765 09:38:43 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:04.765 09:38:43 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:04.765 09:38:43 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:04.765 09:38:43 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:04.765 09:38:43 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:04.765 09:38:43 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:04.765 09:38:43 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:04.765 09:38:43 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:04.765 09:38:43 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:04.765 09:38:43 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:04.765 09:38:43 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:04.765 09:38:43 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:05.025 09:38:43 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:04:05.025 09:38:43 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:05.025 09:38:43 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:05.025 09:38:43 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:05.025 09:38:43 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:05.025 09:38:43 -- common/autotest_common.sh@1541 -- # continue 00:04:05.025 09:38:43 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:05.025 09:38:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:05.025 09:38:43 -- common/autotest_common.sh@10 -- # set +x 00:04:05.025 09:38:43 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:05.025 09:38:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:05.025 09:38:43 -- common/autotest_common.sh@10 -- # set +x 00:04:05.025 09:38:43 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:05.597 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.597 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.859 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.859 09:38:44 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:05.859 09:38:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:05.859 09:38:44 -- common/autotest_common.sh@10 -- # set +x 00:04:05.859 09:38:44 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:05.859 09:38:44 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:05.859 09:38:44 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:05.859 09:38:44 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:05.859 09:38:44 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:05.859 09:38:44 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:05.859 09:38:44 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:05.859 09:38:44 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:05.859 
09:38:44 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:05.859 09:38:44 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:05.859 09:38:44 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:05.859 09:38:44 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:05.859 09:38:44 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:05.859 09:38:44 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:05.859 09:38:44 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:05.859 09:38:44 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:05.859 09:38:44 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:05.859 09:38:44 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:05.859 09:38:44 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:05.859 09:38:44 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:05.859 09:38:44 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:05.859 09:38:44 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:05.859 09:38:44 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:05.859 09:38:44 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:05.859 09:38:44 -- common/autotest_common.sh@1570 -- # return 0 00:04:05.859 09:38:44 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:05.859 09:38:44 -- common/autotest_common.sh@1578 -- # return 0 00:04:05.859 09:38:44 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:05.859 09:38:44 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:05.859 09:38:44 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:05.859 09:38:44 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:05.859 09:38:44 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:05.859 09:38:44 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:04:05.859 09:38:44 -- common/autotest_common.sh@10 -- # set +x 00:04:05.859 09:38:44 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:05.859 09:38:44 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:05.859 09:38:44 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:05.859 09:38:44 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:05.859 09:38:44 -- common/autotest_common.sh@10 -- # set +x 00:04:05.859 ************************************ 00:04:05.859 START TEST env 00:04:05.859 ************************************ 00:04:05.859 09:38:44 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:06.121 * Looking for test storage... 00:04:06.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:06.121 09:38:44 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:06.121 09:38:44 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:06.121 09:38:44 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:06.121 09:38:44 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:06.121 09:38:44 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.121 09:38:44 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.121 09:38:44 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.121 09:38:44 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.121 09:38:44 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.121 09:38:44 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.121 09:38:44 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.121 09:38:44 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.121 09:38:44 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.121 09:38:44 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.121 09:38:44 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.121 09:38:44 env -- 
scripts/common.sh@344 -- # case "$op" in 00:04:06.121 09:38:44 env -- scripts/common.sh@345 -- # : 1 00:04:06.121 09:38:44 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.121 09:38:44 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:06.121 09:38:44 env -- scripts/common.sh@365 -- # decimal 1 00:04:06.121 09:38:44 env -- scripts/common.sh@353 -- # local d=1 00:04:06.121 09:38:44 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.121 09:38:44 env -- scripts/common.sh@355 -- # echo 1 00:04:06.121 09:38:44 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.121 09:38:44 env -- scripts/common.sh@366 -- # decimal 2 00:04:06.121 09:38:44 env -- scripts/common.sh@353 -- # local d=2 00:04:06.121 09:38:44 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.121 09:38:44 env -- scripts/common.sh@355 -- # echo 2 00:04:06.121 09:38:44 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.121 09:38:44 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.121 09:38:44 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.121 09:38:44 env -- scripts/common.sh@368 -- # return 0 00:04:06.121 09:38:44 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.121 09:38:44 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:06.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.121 --rc genhtml_branch_coverage=1 00:04:06.121 --rc genhtml_function_coverage=1 00:04:06.121 --rc genhtml_legend=1 00:04:06.121 --rc geninfo_all_blocks=1 00:04:06.121 --rc geninfo_unexecuted_blocks=1 00:04:06.121 00:04:06.121 ' 00:04:06.121 09:38:44 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:06.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.121 --rc genhtml_branch_coverage=1 00:04:06.121 --rc genhtml_function_coverage=1 00:04:06.121 --rc genhtml_legend=1 00:04:06.121 --rc 
geninfo_all_blocks=1 00:04:06.121 --rc geninfo_unexecuted_blocks=1 00:04:06.121 00:04:06.121 ' 00:04:06.121 09:38:44 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:06.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.121 --rc genhtml_branch_coverage=1 00:04:06.121 --rc genhtml_function_coverage=1 00:04:06.121 --rc genhtml_legend=1 00:04:06.121 --rc geninfo_all_blocks=1 00:04:06.121 --rc geninfo_unexecuted_blocks=1 00:04:06.121 00:04:06.121 ' 00:04:06.121 09:38:44 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:06.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.121 --rc genhtml_branch_coverage=1 00:04:06.121 --rc genhtml_function_coverage=1 00:04:06.121 --rc genhtml_legend=1 00:04:06.121 --rc geninfo_all_blocks=1 00:04:06.121 --rc geninfo_unexecuted_blocks=1 00:04:06.121 00:04:06.121 ' 00:04:06.121 09:38:44 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:06.121 09:38:44 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:06.121 09:38:44 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:06.121 09:38:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.121 ************************************ 00:04:06.121 START TEST env_memory 00:04:06.121 ************************************ 00:04:06.121 09:38:44 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:06.121 00:04:06.121 00:04:06.121 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.121 http://cunit.sourceforge.net/ 00:04:06.121 00:04:06.121 00:04:06.121 Suite: memory 00:04:06.121 Test: alloc and free memory map ...[2024-10-30 09:38:44.625444] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:06.121 passed 00:04:06.121 Test: mem map translation ...[2024-10-30 09:38:44.664671] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:06.121 [2024-10-30 09:38:44.664858] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:06.121 [2024-10-30 09:38:44.664966] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:06.121 [2024-10-30 09:38:44.665015] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:06.121 passed 00:04:06.121 Test: mem map registration ...[2024-10-30 09:38:44.733204] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:06.121 [2024-10-30 09:38:44.733364] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:06.383 passed 00:04:06.383 Test: mem map adjacent registrations ...passed 00:04:06.383 00:04:06.383 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.383 suites 1 1 n/a 0 0 00:04:06.383 tests 4 4 4 0 0 00:04:06.383 asserts 152 152 152 0 n/a 00:04:06.383 00:04:06.383 Elapsed time = 0.233 seconds 00:04:06.383 00:04:06.383 real 0m0.265s 00:04:06.383 user 0m0.237s 00:04:06.383 sys 0m0.019s 00:04:06.383 09:38:44 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:06.383 ************************************ 00:04:06.383 END TEST env_memory 00:04:06.383 ************************************ 00:04:06.383 09:38:44 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:06.383 09:38:44 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:06.383 
09:38:44 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:06.383 09:38:44 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:06.383 09:38:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.383 ************************************ 00:04:06.383 START TEST env_vtophys 00:04:06.383 ************************************ 00:04:06.383 09:38:44 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:06.383 EAL: lib.eal log level changed from notice to debug 00:04:06.383 EAL: Detected lcore 0 as core 0 on socket 0 00:04:06.383 EAL: Detected lcore 1 as core 0 on socket 0 00:04:06.383 EAL: Detected lcore 2 as core 0 on socket 0 00:04:06.383 EAL: Detected lcore 3 as core 0 on socket 0 00:04:06.383 EAL: Detected lcore 4 as core 0 on socket 0 00:04:06.383 EAL: Detected lcore 5 as core 0 on socket 0 00:04:06.383 EAL: Detected lcore 6 as core 0 on socket 0 00:04:06.383 EAL: Detected lcore 7 as core 0 on socket 0 00:04:06.383 EAL: Detected lcore 8 as core 0 on socket 0 00:04:06.383 EAL: Detected lcore 9 as core 0 on socket 0 00:04:06.383 EAL: Maximum logical cores by configuration: 128 00:04:06.383 EAL: Detected CPU lcores: 10 00:04:06.383 EAL: Detected NUMA nodes: 1 00:04:06.383 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:06.383 EAL: Detected shared linkage of DPDK 00:04:06.383 EAL: No shared files mode enabled, IPC will be disabled 00:04:06.383 EAL: Selected IOVA mode 'PA' 00:04:06.383 EAL: Probing VFIO support... 00:04:06.383 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:06.383 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:06.383 EAL: Ask a virtual area of 0x2e000 bytes 00:04:06.383 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:06.383 EAL: Setting up physically contiguous memory... 
00:04:06.383 EAL: Setting maximum number of open files to 524288 00:04:06.383 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:06.383 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:06.383 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.383 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:06.383 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.383 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.383 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:06.383 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:06.383 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.383 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:06.383 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.383 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.383 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:06.383 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:06.383 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.383 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:06.383 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.383 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.383 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:06.383 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:06.383 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.383 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:06.383 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.383 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.383 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:06.383 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:06.384 EAL: Hugepages will be freed exactly as allocated. 
00:04:06.384 EAL: No shared files mode enabled, IPC is disabled 00:04:06.384 EAL: No shared files mode enabled, IPC is disabled 00:04:06.645 EAL: TSC frequency is ~2600000 KHz 00:04:06.645 EAL: Main lcore 0 is ready (tid=7facd6413a40;cpuset=[0]) 00:04:06.645 EAL: Trying to obtain current memory policy. 00:04:06.645 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.645 EAL: Restoring previous memory policy: 0 00:04:06.645 EAL: request: mp_malloc_sync 00:04:06.645 EAL: No shared files mode enabled, IPC is disabled 00:04:06.645 EAL: Heap on socket 0 was expanded by 2MB 00:04:06.645 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:06.645 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:06.645 EAL: Mem event callback 'spdk:(nil)' registered 00:04:06.645 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:06.645 00:04:06.645 00:04:06.645 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.645 http://cunit.sourceforge.net/ 00:04:06.645 00:04:06.645 00:04:06.645 Suite: components_suite 00:04:06.905 Test: vtophys_malloc_test ...passed 00:04:06.905 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:06.905 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.905 EAL: Restoring previous memory policy: 4 00:04:06.905 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.905 EAL: request: mp_malloc_sync 00:04:06.905 EAL: No shared files mode enabled, IPC is disabled 00:04:06.905 EAL: Heap on socket 0 was expanded by 4MB 00:04:06.905 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.905 EAL: request: mp_malloc_sync 00:04:06.905 EAL: No shared files mode enabled, IPC is disabled 00:04:06.905 EAL: Heap on socket 0 was shrunk by 4MB 00:04:06.905 EAL: Trying to obtain current memory policy. 
00:04:06.905 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.905 EAL: Restoring previous memory policy: 4 00:04:06.905 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.905 EAL: request: mp_malloc_sync 00:04:06.905 EAL: No shared files mode enabled, IPC is disabled 00:04:06.905 EAL: Heap on socket 0 was expanded by 6MB 00:04:06.905 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.905 EAL: request: mp_malloc_sync 00:04:06.905 EAL: No shared files mode enabled, IPC is disabled 00:04:06.905 EAL: Heap on socket 0 was shrunk by 6MB 00:04:06.905 EAL: Trying to obtain current memory policy. 00:04:06.905 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.905 EAL: Restoring previous memory policy: 4 00:04:06.905 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.905 EAL: request: mp_malloc_sync 00:04:06.905 EAL: No shared files mode enabled, IPC is disabled 00:04:06.905 EAL: Heap on socket 0 was expanded by 10MB 00:04:06.905 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.905 EAL: request: mp_malloc_sync 00:04:06.905 EAL: No shared files mode enabled, IPC is disabled 00:04:06.905 EAL: Heap on socket 0 was shrunk by 10MB 00:04:06.905 EAL: Trying to obtain current memory policy. 00:04:06.905 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.905 EAL: Restoring previous memory policy: 4 00:04:06.905 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.905 EAL: request: mp_malloc_sync 00:04:06.905 EAL: No shared files mode enabled, IPC is disabled 00:04:06.905 EAL: Heap on socket 0 was expanded by 18MB 00:04:06.905 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.905 EAL: request: mp_malloc_sync 00:04:06.905 EAL: No shared files mode enabled, IPC is disabled 00:04:06.905 EAL: Heap on socket 0 was shrunk by 18MB 00:04:06.905 EAL: Trying to obtain current memory policy. 
00:04:06.905 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.905 EAL: Restoring previous memory policy: 4 00:04:06.905 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.905 EAL: request: mp_malloc_sync 00:04:06.905 EAL: No shared files mode enabled, IPC is disabled 00:04:06.905 EAL: Heap on socket 0 was expanded by 34MB 00:04:07.165 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.165 EAL: request: mp_malloc_sync 00:04:07.165 EAL: No shared files mode enabled, IPC is disabled 00:04:07.165 EAL: Heap on socket 0 was shrunk by 34MB 00:04:07.165 EAL: Trying to obtain current memory policy. 00:04:07.166 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.166 EAL: Restoring previous memory policy: 4 00:04:07.166 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.166 EAL: request: mp_malloc_sync 00:04:07.166 EAL: No shared files mode enabled, IPC is disabled 00:04:07.166 EAL: Heap on socket 0 was expanded by 66MB 00:04:07.166 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.166 EAL: request: mp_malloc_sync 00:04:07.166 EAL: No shared files mode enabled, IPC is disabled 00:04:07.166 EAL: Heap on socket 0 was shrunk by 66MB 00:04:07.166 EAL: Trying to obtain current memory policy. 00:04:07.166 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.166 EAL: Restoring previous memory policy: 4 00:04:07.166 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.166 EAL: request: mp_malloc_sync 00:04:07.166 EAL: No shared files mode enabled, IPC is disabled 00:04:07.166 EAL: Heap on socket 0 was expanded by 130MB 00:04:07.426 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.426 EAL: request: mp_malloc_sync 00:04:07.426 EAL: No shared files mode enabled, IPC is disabled 00:04:07.426 EAL: Heap on socket 0 was shrunk by 130MB 00:04:07.426 EAL: Trying to obtain current memory policy. 
00:04:07.426 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.687 EAL: Restoring previous memory policy: 4 00:04:07.687 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.687 EAL: request: mp_malloc_sync 00:04:07.687 EAL: No shared files mode enabled, IPC is disabled 00:04:07.687 EAL: Heap on socket 0 was expanded by 258MB 00:04:07.948 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.948 EAL: request: mp_malloc_sync 00:04:07.948 EAL: No shared files mode enabled, IPC is disabled 00:04:07.948 EAL: Heap on socket 0 was shrunk by 258MB 00:04:08.207 EAL: Trying to obtain current memory policy. 00:04:08.207 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.207 EAL: Restoring previous memory policy: 4 00:04:08.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.207 EAL: request: mp_malloc_sync 00:04:08.207 EAL: No shared files mode enabled, IPC is disabled 00:04:08.207 EAL: Heap on socket 0 was expanded by 514MB 00:04:08.775 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.775 EAL: request: mp_malloc_sync 00:04:08.775 EAL: No shared files mode enabled, IPC is disabled 00:04:08.775 EAL: Heap on socket 0 was shrunk by 514MB 00:04:09.344 EAL: Trying to obtain current memory policy. 
00:04:09.344 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:09.605 EAL: Restoring previous memory policy: 4
00:04:09.605 EAL: Calling mem event callback 'spdk:(nil)'
00:04:09.605 EAL: request: mp_malloc_sync
00:04:09.605 EAL: No shared files mode enabled, IPC is disabled
00:04:09.605 EAL: Heap on socket 0 was expanded by 1026MB
00:04:10.990 EAL: Calling mem event callback 'spdk:(nil)'
00:04:10.990 EAL: request: mp_malloc_sync
00:04:10.990 EAL: No shared files mode enabled, IPC is disabled
00:04:10.990 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:11.932 passed
00:04:11.932 
00:04:11.932 Run Summary: Type Total Ran Passed Failed Inactive
00:04:11.932 suites 1 1 n/a 0 0
00:04:11.932 tests 2 2 2 0 0
00:04:11.932 asserts 5810 5810 5810 0 n/a
00:04:11.932 
00:04:11.932 Elapsed time = 5.175 seconds
00:04:11.932 EAL: Calling mem event callback 'spdk:(nil)'
00:04:11.932 EAL: request: mp_malloc_sync
00:04:11.932 EAL: No shared files mode enabled, IPC is disabled
00:04:11.932 EAL: Heap on socket 0 was shrunk by 2MB
00:04:11.932 EAL: No shared files mode enabled, IPC is disabled
00:04:11.932 EAL: No shared files mode enabled, IPC is disabled
00:04:11.932 EAL: No shared files mode enabled, IPC is disabled
00:04:11.932 
00:04:11.932 real 0m5.452s
00:04:11.932 user 0m4.616s
00:04:11.932 sys 0m0.685s
00:04:11.932 09:38:50 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:11.932 09:38:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:11.932 ************************************
00:04:11.932 END TEST env_vtophys
00:04:11.932 ************************************
00:04:11.932 09:38:50 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:11.932 09:38:50 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:11.932 09:38:50 env
************************************
00:04:11.932 START TEST env_pci
************************************
00:04:11.932 09:38:50 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:11.932 
00:04:11.932 
00:04:11.932 CUnit - A unit testing framework for C - Version 2.1-3
00:04:11.932 http://cunit.sourceforge.net/
00:04:11.932 
00:04:11.932 
00:04:11.932 Suite: pci
00:04:11.932 Test: pci_hook ...[2024-10-30 09:38:50.449180] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56080 has claimed it
00:04:11.932 passed
00:04:11.932 
00:04:11.932 Run Summary: Type Total Ran Passed Failed Inactive
00:04:11.932 suites 1 1 n/a 0 0
00:04:11.932 tests 1 1 1 0 0
00:04:11.932 asserts 25 25 25 0 n/a
00:04:11.932 
00:04:11.932 Elapsed time = 0.006 seconds
00:04:11.932 EAL: Cannot find device (10000:00:01.0)
00:04:11.932 EAL: Failed to attach device on primary process
00:04:11.932 
00:04:11.932 real 0m0.064s
00:04:11.932 user 0m0.034s
00:04:11.932 sys 0m0.030s
00:04:11.932 09:38:50 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:11.932 09:38:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:11.932 ************************************
00:04:11.932 END TEST env_pci
00:04:11.932 ************************************
00:04:11.932 09:38:50 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:11.932 09:38:50 env -- env/env.sh@15 -- # uname
00:04:11.932 09:38:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:11.932 09:38:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:11.932 09:38:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:11.932 09:38:50 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:04:11.932 09:38:50 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:11.932 09:38:50 env -- common/autotest_common.sh@10 -- # set +x
00:04:12.195 ************************************
00:04:12.195 START TEST env_dpdk_post_init
************************************
00:04:12.195 09:38:50 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:12.195 EAL: Detected CPU lcores: 10
00:04:12.195 EAL: Detected NUMA nodes: 1
00:04:12.195 EAL: Detected shared linkage of DPDK
00:04:12.195 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:12.195 EAL: Selected IOVA mode 'PA'
00:04:12.195 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:12.195 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:04:12.195 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:04:12.195 Starting DPDK initialization...
00:04:12.195 Starting SPDK post initialization...
00:04:12.195 SPDK NVMe probe
00:04:12.195 Attaching to 0000:00:10.0
00:04:12.195 Attaching to 0000:00:11.0
00:04:12.195 Attached to 0000:00:10.0
00:04:12.195 Attached to 0000:00:11.0
00:04:12.195 Cleaning up...
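Each `START TEST` / `END TEST` section in this log is framed by the harness's `run_test` wrapper, whose xtrace output (`'[' 2 -le 1 ']'`, `xtrace_disable`, `set +x`) appears throughout. Below is a minimal, hypothetical sketch of that banner-wrapper pattern; it is a simplified reimplementation for illustration, not the actual `autotest_common.sh` source:

```shell
#!/usr/bin/env bash
# Simplified run_test-style wrapper: print a START banner, run the given
# command, print an END banner, and propagate the command's exit status.
run_test() {
    # Mirrors the "'[' 2 -le 1 ']'" check in the trace: a test needs a
    # name plus at least one command word.
    if [ "$#" -le 1 ]; then
        echo "usage: run_test <name> <command> [args...]" >&2
        return 1
    fi
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
```

Invoked as `run_test env_pci /path/to/pci_ut`, it frames the binary's output with banners like the ones above.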
00:04:12.195 
00:04:12.195 real 0m0.239s
00:04:12.195 user 0m0.070s
00:04:12.195 sys 0m0.069s
00:04:12.195 09:38:50 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:12.195 ************************************
00:04:12.195 END TEST env_dpdk_post_init
************************************
00:04:12.195 09:38:50 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:12.457 09:38:50 env -- env/env.sh@26 -- # uname
00:04:12.457 09:38:50 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:12.457 09:38:50 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:04:12.457 09:38:50 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:12.457 09:38:50 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:12.457 09:38:50 env -- common/autotest_common.sh@10 -- # set +x
00:04:12.457 ************************************
00:04:12.457 START TEST env_mem_callbacks
00:04:12.457 ************************************
00:04:12.457 09:38:50 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:04:12.457 EAL: Detected CPU lcores: 10
00:04:12.457 EAL: Detected NUMA nodes: 1
00:04:12.457 EAL: Detected shared linkage of DPDK
00:04:12.457 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:12.457 EAL: Selected IOVA mode 'PA'
00:04:12.457 
00:04:12.457 
00:04:12.457 CUnit - A unit testing framework for C - Version 2.1-3
00:04:12.457 http://cunit.sourceforge.net/
00:04:12.457 
00:04:12.457 
00:04:12.457 Suite: memory
00:04:12.457 Test: test ...
00:04:12.457 register 0x200000200000 2097152
00:04:12.457 malloc 3145728
00:04:12.457 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:12.457 register 0x200000400000 4194304
00:04:12.457 buf 0x2000004fffc0 len 3145728 PASSED
00:04:12.457 malloc 64
00:04:12.457 buf 0x2000004ffec0 len 64 PASSED
00:04:12.457 malloc 4194304
00:04:12.457 register 0x200000800000 6291456
00:04:12.457 buf 0x2000009fffc0 len 4194304 PASSED
00:04:12.457 free 0x2000004fffc0 3145728
00:04:12.457 free 0x2000004ffec0 64
00:04:12.457 unregister 0x200000400000 4194304 PASSED
00:04:12.457 free 0x2000009fffc0 4194304
00:04:12.457 unregister 0x200000800000 6291456 PASSED
00:04:12.457 malloc 8388608
00:04:12.457 register 0x200000400000 10485760
00:04:12.457 buf 0x2000005fffc0 len 8388608 PASSED
00:04:12.457 free 0x2000005fffc0 8388608
00:04:12.457 unregister 0x200000400000 10485760 PASSED
00:04:12.457 passed
00:04:12.457 
00:04:12.457 Run Summary: Type Total Ran Passed Failed Inactive
00:04:12.457 suites 1 1 n/a 0 0
00:04:12.457 tests 1 1 1 0 0
00:04:12.457 asserts 15 15 15 0 n/a
00:04:12.457 
00:04:12.457 Elapsed time = 0.052 seconds
00:04:12.719 
00:04:12.719 real 0m0.221s
00:04:12.719 user 0m0.068s
00:04:12.719 sys 0m0.050s
00:04:12.719 ************************************
00:04:12.719 END TEST env_mem_callbacks
00:04:12.719 ************************************
00:04:12.719 09:38:51 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:12.719 09:38:51 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:12.719 
00:04:12.719 real 0m6.731s
00:04:12.719 user 0m5.180s
00:04:12.719 sys 0m1.065s
00:04:12.719 09:38:51 env -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:12.719 ************************************
00:04:12.719 END TEST env
00:04:12.719 ************************************
00:04:12.719 09:38:51 env -- common/autotest_common.sh@10 -- # set +x
00:04:12.719 09:38:51 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:04:12.719 09:38:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:12.719 09:38:51 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:12.719 -- common/autotest_common.sh@10 -- # set +x
00:04:12.719 ************************************
00:04:12.719 START TEST rpc
00:04:12.719 ************************************
00:04:12.719 09:38:51 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:04:12.719 * Looking for test storage...
00:04:12.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:04:12.719 09:38:51 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:04:12.719 09:38:51 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:04:12.719 09:38:51 rpc -- common/autotest_common.sh@1691 -- # lcov --version
00:04:12.719 09:38:51 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:04:12.719 09:38:51 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:12.719 09:38:51 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:12.719 09:38:51 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:12.719 09:38:51 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:12.719 09:38:51 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:12.719 09:38:51 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:12.719 09:38:51 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:12.719 09:38:51 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:12.719 09:38:51 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:12.719 09:38:51 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:12.719 09:38:51 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:12.719 09:38:51 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:12.719 09:38:51 rpc -- scripts/common.sh@345 -- # : 1
00:04:12.719 09:38:51 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:12.719 09:38:51 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:12.719 09:38:51 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:12.719 09:38:51 rpc -- scripts/common.sh@353 -- # local d=1
00:04:12.719 09:38:51 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:12.719 09:38:51 rpc -- scripts/common.sh@355 -- # echo 1
00:04:12.979 09:38:51 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:12.979 09:38:51 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:12.979 09:38:51 rpc -- scripts/common.sh@353 -- # local d=2
00:04:12.979 09:38:51 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:12.979 09:38:51 rpc -- scripts/common.sh@355 -- # echo 2
00:04:12.979 09:38:51 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:12.979 09:38:51 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:12.979 09:38:51 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:12.979 09:38:51 rpc -- scripts/common.sh@368 -- # return 0
00:04:12.979 09:38:51 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:12.979 09:38:51 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:04:12.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:12.979 --rc genhtml_branch_coverage=1
00:04:12.979 --rc genhtml_function_coverage=1
00:04:12.979 --rc genhtml_legend=1
00:04:12.979 --rc geninfo_all_blocks=1
00:04:12.979 --rc geninfo_unexecuted_blocks=1
00:04:12.979 
00:04:12.979 '
00:04:12.979 09:38:51 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:04:12.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:12.979 --rc genhtml_branch_coverage=1
00:04:12.979 --rc genhtml_function_coverage=1
00:04:12.979 --rc genhtml_legend=1
00:04:12.979 --rc geninfo_all_blocks=1
00:04:12.979 --rc geninfo_unexecuted_blocks=1
00:04:12.979 
00:04:12.979 '
00:04:12.979 09:38:51 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:04:12.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:12.979 --rc genhtml_branch_coverage=1
00:04:12.979 --rc genhtml_function_coverage=1
00:04:12.979 --rc genhtml_legend=1
00:04:12.979 --rc geninfo_all_blocks=1
00:04:12.979 --rc geninfo_unexecuted_blocks=1
00:04:12.979 
00:04:12.979 '
00:04:12.979 09:38:51 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:04:12.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:12.979 --rc genhtml_branch_coverage=1
00:04:12.979 --rc genhtml_function_coverage=1
00:04:12.979 --rc genhtml_legend=1
00:04:12.979 --rc geninfo_all_blocks=1
00:04:12.979 --rc geninfo_unexecuted_blocks=1
00:04:12.979 
00:04:12.979 '
00:04:12.979 09:38:51 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56207
00:04:12.979 09:38:51 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:12.979 09:38:51 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56207
00:04:12.979 09:38:51 rpc -- common/autotest_common.sh@833 -- # '[' -z 56207 ']'
00:04:12.979 09:38:51 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:12.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:12.979 09:38:51 rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:04:12.979 09:38:51 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:12.979 09:38:51 rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:04:12.979 09:38:51 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:12.979 09:38:51 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:04:12.979 [2024-10-30 09:38:51.416775] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization...
00:04:12.979 [2024-10-30 09:38:51.416905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56207 ]
00:04:12.979 [2024-10-30 09:38:51.572329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:13.237 [2024-10-30 09:38:51.677838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:13.237 [2024-10-30 09:38:51.677902] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56207' to capture a snapshot of events at runtime.
00:04:13.238 [2024-10-30 09:38:51.677917] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:13.238 [2024-10-30 09:38:51.677927] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:13.238 [2024-10-30 09:38:51.677935] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56207 for offline analysis/debug.
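The rpc.sh preamble traced above checks lcov's version with `lt 1.15 2`, splitting each version string on `.`, `-`, and `:` (`IFS=.-:`) and comparing the fields numerically one by one. Below is a self-contained sketch of that dotted-version comparison; the function names mirror the trace, but this is a simplified reimplementation for illustration (numeric fields only), not the actual `scripts/common.sh` source:

```shell
#!/usr/bin/env bash
# Simplified cmp_versions/lt in the style traced above: split both
# versions on '.', '-', ':' and compare numerically field by field.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v a b
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
        if (( a > b )); then [[ $op == '>' || $op == '>=' ]]; return; fi
        if (( a < b )); then [[ $op == '<' || $op == '<=' ]]; return; fi
    done
    # All fields equal: only non-strict operators succeed.
    [[ $op == '==' || $op == '>=' || $op == '<=' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }   # e.g. "lt 1.15 2", as in the trace
```

With this, `lt 1.15 2` succeeds (1 < 2 on the first field), which is why the traced call returns 0 above.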
00:04:13.238 [2024-10-30 09:38:51.678888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:13.808 09:38:52 rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:04:13.808 09:38:52 rpc -- common/autotest_common.sh@866 -- # return 0
00:04:13.808 09:38:52 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:04:13.808 09:38:52 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:04:13.808 09:38:52 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:13.808 09:38:52 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:13.808 09:38:52 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:13.808 09:38:52 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:13.808 09:38:52 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:13.808 ************************************
00:04:13.808 START TEST rpc_integrity
00:04:13.808 ************************************
00:04:13.808 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity
00:04:13.808 09:38:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:13.808 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:13.808 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:13.808 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:13.808 09:38:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:13.808 09:38:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:13.808 09:38:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:13.808 09:38:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:13.808 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:13.808 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:13.808 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:13.808 09:38:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:13.808 09:38:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:13.808 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:13.808 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:13.808 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:13.808 09:38:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:13.808 {
00:04:13.808 "name": "Malloc0",
00:04:13.808 "aliases": [
00:04:13.808 "155a8879-612e-4a11-b464-2ebe9df23745"
00:04:13.808 ],
00:04:13.808 "product_name": "Malloc disk",
00:04:13.808 "block_size": 512,
00:04:13.808 "num_blocks": 16384,
00:04:13.808 "uuid": "155a8879-612e-4a11-b464-2ebe9df23745",
00:04:13.808 "assigned_rate_limits": {
00:04:13.808 "rw_ios_per_sec": 0,
00:04:13.808 "rw_mbytes_per_sec": 0,
00:04:13.808 "r_mbytes_per_sec": 0,
00:04:13.808 "w_mbytes_per_sec": 0
00:04:13.808 },
00:04:13.808 "claimed": false,
00:04:13.808 "zoned": false,
00:04:13.808 "supported_io_types": {
00:04:13.808 "read": true,
00:04:13.808 "write": true,
00:04:13.808 "unmap": true,
00:04:13.808 "flush": true,
00:04:13.808 "reset": true,
00:04:13.808 "nvme_admin": false,
00:04:13.808 "nvme_io": false,
00:04:13.808 "nvme_io_md": false,
00:04:13.808 "write_zeroes": true,
00:04:13.808 "zcopy": true,
00:04:13.808 "get_zone_info": false,
00:04:13.808 "zone_management": false,
00:04:13.808 "zone_append": false,
00:04:13.808 "compare": false,
00:04:13.808 "compare_and_write": false,
00:04:13.808 "abort": true,
00:04:13.808 "seek_hole": false,
00:04:13.808 "seek_data": false,
00:04:13.808 "copy": true,
00:04:13.808 "nvme_iov_md": false
00:04:13.808 },
00:04:13.808 "memory_domains": [
00:04:13.808 {
00:04:13.808 "dma_device_id": "system",
00:04:13.808 "dma_device_type": 1
00:04:13.808 },
00:04:13.808 {
00:04:13.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:13.808 "dma_device_type": 2
00:04:13.808 }
00:04:13.808 ],
00:04:13.808 "driver_specific": {}
00:04:13.808 }
00:04:13.808 ]'
00:04:13.808 09:38:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:13.808 09:38:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:13.808 09:38:52 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:13.808 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:13.808 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:13.808 [2024-10-30 09:38:52.402208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:13.808 [2024-10-30 09:38:52.402265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:13.808 [2024-10-30 09:38:52.402286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:04:13.808 [2024-10-30 09:38:52.402299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:13.808 [2024-10-30 09:38:52.404484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:13.808 [2024-10-30 09:38:52.404525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:13.808 Passthru0
00:04:13.808 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:13.808 09:38:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:13.808 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:13.808 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:14.069 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:14.069 09:38:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:14.069 {
00:04:14.069 "name": "Malloc0",
00:04:14.069 "aliases": [
00:04:14.069 "155a8879-612e-4a11-b464-2ebe9df23745"
00:04:14.069 ],
00:04:14.069 "product_name": "Malloc disk",
00:04:14.069 "block_size": 512,
00:04:14.069 "num_blocks": 16384,
00:04:14.069 "uuid": "155a8879-612e-4a11-b464-2ebe9df23745",
00:04:14.069 "assigned_rate_limits": {
00:04:14.069 "rw_ios_per_sec": 0,
00:04:14.069 "rw_mbytes_per_sec": 0,
00:04:14.069 "r_mbytes_per_sec": 0,
00:04:14.069 "w_mbytes_per_sec": 0
00:04:14.069 },
00:04:14.069 "claimed": true,
00:04:14.069 "claim_type": "exclusive_write",
00:04:14.069 "zoned": false,
00:04:14.069 "supported_io_types": {
00:04:14.069 "read": true,
00:04:14.069 "write": true,
00:04:14.069 "unmap": true,
00:04:14.069 "flush": true,
00:04:14.069 "reset": true,
00:04:14.069 "nvme_admin": false,
00:04:14.069 "nvme_io": false,
00:04:14.069 "nvme_io_md": false,
00:04:14.069 "write_zeroes": true,
00:04:14.069 "zcopy": true,
00:04:14.069 "get_zone_info": false,
00:04:14.069 "zone_management": false,
00:04:14.069 "zone_append": false,
00:04:14.069 "compare": false,
00:04:14.069 "compare_and_write": false,
00:04:14.069 "abort": true,
00:04:14.069 "seek_hole": false,
00:04:14.069 "seek_data": false,
00:04:14.069 "copy": true,
00:04:14.069 "nvme_iov_md": false
00:04:14.069 },
00:04:14.069 "memory_domains": [
00:04:14.069 {
00:04:14.069 "dma_device_id": "system",
00:04:14.069 "dma_device_type": 1
00:04:14.069 },
00:04:14.069 {
00:04:14.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:14.069 "dma_device_type": 2
00:04:14.069 }
00:04:14.069 ],
00:04:14.069 "driver_specific": {}
00:04:14.069 },
00:04:14.069 {
00:04:14.069 "name": "Passthru0",
00:04:14.069 "aliases": [
00:04:14.069 "120dff73-215e-5e20-b19a-5773668d03d4"
00:04:14.069 ],
00:04:14.069 "product_name": "passthru",
00:04:14.069 "block_size": 512,
00:04:14.069 "num_blocks": 16384,
00:04:14.069 "uuid": "120dff73-215e-5e20-b19a-5773668d03d4",
00:04:14.069 "assigned_rate_limits": {
00:04:14.069 "rw_ios_per_sec": 0,
00:04:14.069 "rw_mbytes_per_sec": 0,
00:04:14.069 "r_mbytes_per_sec": 0,
00:04:14.069 "w_mbytes_per_sec": 0
00:04:14.069 },
00:04:14.069 "claimed": false,
00:04:14.069 "zoned": false,
00:04:14.069 "supported_io_types": {
00:04:14.069 "read": true,
00:04:14.069 "write": true,
00:04:14.069 "unmap": true,
00:04:14.069 "flush": true,
00:04:14.069 "reset": true,
00:04:14.069 "nvme_admin": false,
00:04:14.069 "nvme_io": false,
00:04:14.069 "nvme_io_md": false,
00:04:14.069 "write_zeroes": true,
00:04:14.069 "zcopy": true,
00:04:14.069 "get_zone_info": false,
00:04:14.069 "zone_management": false,
00:04:14.069 "zone_append": false,
00:04:14.069 "compare": false,
00:04:14.069 "compare_and_write": false,
00:04:14.069 "abort": true,
00:04:14.069 "seek_hole": false,
00:04:14.069 "seek_data": false,
00:04:14.069 "copy": true,
00:04:14.069 "nvme_iov_md": false
00:04:14.069 },
00:04:14.069 "memory_domains": [
00:04:14.069 {
00:04:14.069 "dma_device_id": "system",
00:04:14.069 "dma_device_type": 1
00:04:14.069 },
00:04:14.069 {
00:04:14.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:14.069 "dma_device_type": 2
00:04:14.069 }
00:04:14.069 ],
00:04:14.069 "driver_specific": {
00:04:14.069 "passthru": {
00:04:14.069 "name": "Passthru0",
00:04:14.069 "base_bdev_name": "Malloc0"
00:04:14.069 }
00:04:14.069 }
00:04:14.069 }
00:04:14.069 ]'
00:04:14.069 09:38:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:14.069 09:38:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:14.069 09:38:52 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:14.069 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:14.069 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:14.069 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:14.069 09:38:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:04:14.069 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:14.069 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:14.070 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:14.070 09:38:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:14.070 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:14.070 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:14.070 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:14.070 09:38:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:14.070 09:38:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:14.070 09:38:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:14.070 
00:04:14.070 real 0m0.241s
00:04:14.070 user 0m0.131s
00:04:14.070 sys 0m0.027s
00:04:14.070 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:14.070 09:38:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:14.070 ************************************
00:04:14.070 END TEST rpc_integrity
00:04:14.070 ************************************
00:04:14.070 09:38:52 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:04:14.070 09:38:52 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:14.070 09:38:52 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:14.070 09:38:52 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:14.070 ************************************
00:04:14.070 START TEST rpc_plugins
00:04:14.070 ************************************
00:04:14.070 09:38:52 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins
00:04:14.070 09:38:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:04:14.070 09:38:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:14.070 09:38:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:14.070 09:38:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:14.070 09:38:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:04:14.070 09:38:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:04:14.070 09:38:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:14.070 09:38:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:14.070 09:38:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:14.070 09:38:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:04:14.070 {
00:04:14.070 "name": "Malloc1",
00:04:14.070 "aliases": [
00:04:14.070 "145a4925-fb62-4394-b6e6-e771287700e7"
00:04:14.070 ],
00:04:14.070 "product_name": "Malloc disk",
00:04:14.070 "block_size": 4096,
00:04:14.070 "num_blocks": 256,
00:04:14.070 "uuid": "145a4925-fb62-4394-b6e6-e771287700e7",
00:04:14.070 "assigned_rate_limits": {
00:04:14.070 "rw_ios_per_sec": 0,
00:04:14.070 "rw_mbytes_per_sec": 0,
00:04:14.070 "r_mbytes_per_sec": 0,
00:04:14.070 "w_mbytes_per_sec": 0
00:04:14.070 },
00:04:14.070 "claimed": false,
00:04:14.070 "zoned": false,
00:04:14.070 "supported_io_types": {
00:04:14.070 "read": true,
00:04:14.070 "write": true,
00:04:14.070 "unmap": true,
00:04:14.070 "flush": true,
00:04:14.070 "reset": true,
00:04:14.070 "nvme_admin": false,
00:04:14.070 "nvme_io": false,
00:04:14.070 "nvme_io_md": false,
00:04:14.070 "write_zeroes": true,
00:04:14.070 "zcopy": true,
00:04:14.070 "get_zone_info": false,
00:04:14.070 "zone_management": false,
00:04:14.070 "zone_append": false,
00:04:14.070 "compare": false,
00:04:14.070 "compare_and_write": false,
00:04:14.070 "abort": true,
00:04:14.070 "seek_hole": false,
00:04:14.070 "seek_data": false,
00:04:14.070 "copy": true,
00:04:14.070 "nvme_iov_md": false
00:04:14.070 },
00:04:14.070 "memory_domains": [
00:04:14.070 {
00:04:14.070 "dma_device_id": "system",
00:04:14.070 "dma_device_type": 1
00:04:14.070 },
00:04:14.070 {
00:04:14.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:14.070 "dma_device_type": 2
00:04:14.070 }
00:04:14.070 ],
00:04:14.070 "driver_specific": {}
00:04:14.070 }
00:04:14.070 ]'
00:04:14.070 09:38:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:04:14.070 09:38:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:04:14.070 09:38:52 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:04:14.070 09:38:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:14.070 09:38:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:14.070 09:38:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:14.070 09:38:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:04:14.070 09:38:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:14.070 09:38:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:14.070 09:38:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:14.070 09:38:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:04:14.070 09:38:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:04:14.331 09:38:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:04:14.331 
00:04:14.331 real 0m0.119s
00:04:14.331 user 0m0.070s
00:04:14.331 sys 0m0.011s
00:04:14.331 09:38:52 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:14.331 ************************************
00:04:14.331 END TEST rpc_plugins
00:04:14.331 ************************************
00:04:14.331 09:38:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:14.331 09:38:52 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:04:14.331 09:38:52 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:14.331 09:38:52 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:14.331 09:38:52 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:14.331 ************************************
00:04:14.331 START TEST rpc_trace_cmd_test
00:04:14.331 ************************************
00:04:14.331 09:38:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test
00:04:14.331 09:38:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:04:14.331 09:38:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:04:14.331 09:38:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:14.331 09:38:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:14.331 09:38:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:14.331 09:38:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:04:14.331 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56207",
00:04:14.331 "tpoint_group_mask": "0x8",
00:04:14.331 "iscsi_conn": {
00:04:14.331 "mask": "0x2",
00:04:14.331 "tpoint_mask": "0x0"
00:04:14.331 },
00:04:14.331 "scsi": {
00:04:14.331 "mask": "0x4",
00:04:14.331 "tpoint_mask": "0x0"
00:04:14.331 },
00:04:14.331 "bdev": {
00:04:14.331 "mask": "0x8",
00:04:14.331 "tpoint_mask": "0xffffffffffffffff"
00:04:14.331 },
00:04:14.331 "nvmf_rdma": {
00:04:14.331 "mask": "0x10",
00:04:14.331 "tpoint_mask": "0x0"
00:04:14.331 },
00:04:14.331 "nvmf_tcp": {
00:04:14.331 "mask": "0x20",
00:04:14.331 "tpoint_mask": "0x0"
00:04:14.331 },
00:04:14.331 "ftl": {
00:04:14.331 "mask": "0x40",
00:04:14.331 "tpoint_mask": "0x0"
00:04:14.331 },
00:04:14.331 "blobfs": {
00:04:14.331 "mask": "0x80",
00:04:14.331 "tpoint_mask": "0x0"
00:04:14.331 },
00:04:14.331 "dsa": {
00:04:14.331 "mask": "0x200",
00:04:14.331 "tpoint_mask": "0x0"
00:04:14.331 },
00:04:14.331 "thread": {
00:04:14.331 "mask": "0x400",
00:04:14.331 "tpoint_mask": "0x0"
00:04:14.331 },
00:04:14.331 "nvme_pcie": {
00:04:14.331 "mask": "0x800",
00:04:14.331 "tpoint_mask": "0x0"
00:04:14.331 },
00:04:14.331 "iaa": {
00:04:14.331 "mask": "0x1000",
00:04:14.331 "tpoint_mask": "0x0"
00:04:14.331 },
00:04:14.331 "nvme_tcp": {
00:04:14.331 "mask": "0x2000",
00:04:14.331 "tpoint_mask": "0x0"
00:04:14.331 },
00:04:14.331 "bdev_nvme": {
00:04:14.331 "mask": "0x4000",
00:04:14.331 "tpoint_mask": "0x0"
00:04:14.331 },
00:04:14.331 "sock": {
00:04:14.331 "mask": "0x8000",
00:04:14.331 "tpoint_mask": "0x0"
00:04:14.331 },
00:04:14.331 "blob": {
00:04:14.331 "mask": "0x10000",
00:04:14.331 "tpoint_mask": "0x0"
00:04:14.331 },
00:04:14.331 "bdev_raid": {
00:04:14.331 "mask": "0x20000",
00:04:14.331 "tpoint_mask": "0x0"
00:04:14.331 },
00:04:14.331 "scheduler": {
00:04:14.331 "mask": "0x40000",
00:04:14.331 "tpoint_mask": "0x0"
00:04:14.331 }
00:04:14.331 }'
00:04:14.331 09:38:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:04:14.331 09:38:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:04:14.331 09:38:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:04:14.331 09:38:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:04:14.331 09:38:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:04:14.331 09:38:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:04:14.331 09:38:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:04:14.331 09:38:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:04:14.331 09:38:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:04:14.592 09:38:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:04:14.592 
00:04:14.592 real 0m0.186s
00:04:14.592 user 0m0.153s
00:04:14.592 sys 0m0.024s
00:04:14.592 09:38:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:14.592 ************************************ 00:04:14.592 END TEST rpc_trace_cmd_test 00:04:14.592 ************************************ 00:04:14.592 09:38:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:14.592 09:38:53 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:14.592 09:38:53 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:14.592 09:38:53 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:14.592 09:38:53 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:14.592 09:38:53 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:14.592 09:38:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.592 ************************************ 00:04:14.592 START TEST rpc_daemon_integrity 00:04:14.592 ************************************ 00:04:14.592 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:14.592 09:38:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:14.592 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.592 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.592 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.592 09:38:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:14.592 09:38:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:14.592 09:38:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:14.592 09:38:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:14.592 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.592 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.592 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.592 09:38:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:04:14.592 09:38:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:14.592 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.592 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.592 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.592 09:38:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:14.592 { 00:04:14.592 "name": "Malloc2", 00:04:14.592 "aliases": [ 00:04:14.592 "6a564ede-ae50-4fff-ab6f-59c7e993ff97" 00:04:14.592 ], 00:04:14.592 "product_name": "Malloc disk", 00:04:14.592 "block_size": 512, 00:04:14.592 "num_blocks": 16384, 00:04:14.592 "uuid": "6a564ede-ae50-4fff-ab6f-59c7e993ff97", 00:04:14.592 "assigned_rate_limits": { 00:04:14.592 "rw_ios_per_sec": 0, 00:04:14.592 "rw_mbytes_per_sec": 0, 00:04:14.592 "r_mbytes_per_sec": 0, 00:04:14.592 "w_mbytes_per_sec": 0 00:04:14.592 }, 00:04:14.592 "claimed": false, 00:04:14.592 "zoned": false, 00:04:14.592 "supported_io_types": { 00:04:14.592 "read": true, 00:04:14.592 "write": true, 00:04:14.592 "unmap": true, 00:04:14.592 "flush": true, 00:04:14.592 "reset": true, 00:04:14.592 "nvme_admin": false, 00:04:14.592 "nvme_io": false, 00:04:14.592 "nvme_io_md": false, 00:04:14.592 "write_zeroes": true, 00:04:14.592 "zcopy": true, 00:04:14.592 "get_zone_info": false, 00:04:14.592 "zone_management": false, 00:04:14.592 "zone_append": false, 00:04:14.592 "compare": false, 00:04:14.592 "compare_and_write": false, 00:04:14.592 "abort": true, 00:04:14.592 "seek_hole": false, 00:04:14.592 "seek_data": false, 00:04:14.592 "copy": true, 00:04:14.592 "nvme_iov_md": false 00:04:14.592 }, 00:04:14.592 "memory_domains": [ 00:04:14.592 { 00:04:14.592 "dma_device_id": "system", 00:04:14.592 "dma_device_type": 1 00:04:14.592 }, 00:04:14.592 { 00:04:14.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.592 "dma_device_type": 2 00:04:14.592 } 
00:04:14.592 ], 00:04:14.592 "driver_specific": {} 00:04:14.592 } 00:04:14.592 ]' 00:04:14.592 09:38:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:14.592 09:38:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:14.592 09:38:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:14.592 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.592 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.592 [2024-10-30 09:38:53.125697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:14.592 [2024-10-30 09:38:53.125750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:14.592 [2024-10-30 09:38:53.125769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:14.592 [2024-10-30 09:38:53.125779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:14.592 [2024-10-30 09:38:53.127918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:14.592 [2024-10-30 09:38:53.127955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:14.592 Passthru0 00:04:14.592 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.593 09:38:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:14.593 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.593 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.593 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.593 09:38:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:14.593 { 00:04:14.593 "name": "Malloc2", 00:04:14.593 "aliases": [ 00:04:14.593 "6a564ede-ae50-4fff-ab6f-59c7e993ff97" 
00:04:14.593 ], 00:04:14.593 "product_name": "Malloc disk", 00:04:14.593 "block_size": 512, 00:04:14.593 "num_blocks": 16384, 00:04:14.593 "uuid": "6a564ede-ae50-4fff-ab6f-59c7e993ff97", 00:04:14.593 "assigned_rate_limits": { 00:04:14.593 "rw_ios_per_sec": 0, 00:04:14.593 "rw_mbytes_per_sec": 0, 00:04:14.593 "r_mbytes_per_sec": 0, 00:04:14.593 "w_mbytes_per_sec": 0 00:04:14.593 }, 00:04:14.593 "claimed": true, 00:04:14.593 "claim_type": "exclusive_write", 00:04:14.593 "zoned": false, 00:04:14.593 "supported_io_types": { 00:04:14.593 "read": true, 00:04:14.593 "write": true, 00:04:14.593 "unmap": true, 00:04:14.593 "flush": true, 00:04:14.593 "reset": true, 00:04:14.593 "nvme_admin": false, 00:04:14.593 "nvme_io": false, 00:04:14.593 "nvme_io_md": false, 00:04:14.593 "write_zeroes": true, 00:04:14.593 "zcopy": true, 00:04:14.593 "get_zone_info": false, 00:04:14.593 "zone_management": false, 00:04:14.593 "zone_append": false, 00:04:14.593 "compare": false, 00:04:14.593 "compare_and_write": false, 00:04:14.593 "abort": true, 00:04:14.593 "seek_hole": false, 00:04:14.593 "seek_data": false, 00:04:14.593 "copy": true, 00:04:14.593 "nvme_iov_md": false 00:04:14.593 }, 00:04:14.593 "memory_domains": [ 00:04:14.593 { 00:04:14.593 "dma_device_id": "system", 00:04:14.593 "dma_device_type": 1 00:04:14.593 }, 00:04:14.593 { 00:04:14.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.593 "dma_device_type": 2 00:04:14.593 } 00:04:14.593 ], 00:04:14.593 "driver_specific": {} 00:04:14.593 }, 00:04:14.593 { 00:04:14.593 "name": "Passthru0", 00:04:14.593 "aliases": [ 00:04:14.593 "6ac4e766-4cf5-5d13-b85f-132854a860f9" 00:04:14.593 ], 00:04:14.593 "product_name": "passthru", 00:04:14.593 "block_size": 512, 00:04:14.593 "num_blocks": 16384, 00:04:14.593 "uuid": "6ac4e766-4cf5-5d13-b85f-132854a860f9", 00:04:14.593 "assigned_rate_limits": { 00:04:14.593 "rw_ios_per_sec": 0, 00:04:14.593 "rw_mbytes_per_sec": 0, 00:04:14.593 "r_mbytes_per_sec": 0, 00:04:14.593 "w_mbytes_per_sec": 0 
00:04:14.593 }, 00:04:14.593 "claimed": false, 00:04:14.593 "zoned": false, 00:04:14.593 "supported_io_types": { 00:04:14.593 "read": true, 00:04:14.593 "write": true, 00:04:14.593 "unmap": true, 00:04:14.593 "flush": true, 00:04:14.593 "reset": true, 00:04:14.593 "nvme_admin": false, 00:04:14.593 "nvme_io": false, 00:04:14.593 "nvme_io_md": false, 00:04:14.593 "write_zeroes": true, 00:04:14.593 "zcopy": true, 00:04:14.593 "get_zone_info": false, 00:04:14.593 "zone_management": false, 00:04:14.593 "zone_append": false, 00:04:14.593 "compare": false, 00:04:14.593 "compare_and_write": false, 00:04:14.593 "abort": true, 00:04:14.593 "seek_hole": false, 00:04:14.593 "seek_data": false, 00:04:14.593 "copy": true, 00:04:14.593 "nvme_iov_md": false 00:04:14.593 }, 00:04:14.593 "memory_domains": [ 00:04:14.593 { 00:04:14.593 "dma_device_id": "system", 00:04:14.593 "dma_device_type": 1 00:04:14.593 }, 00:04:14.593 { 00:04:14.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.593 "dma_device_type": 2 00:04:14.593 } 00:04:14.593 ], 00:04:14.593 "driver_specific": { 00:04:14.593 "passthru": { 00:04:14.593 "name": "Passthru0", 00:04:14.593 "base_bdev_name": "Malloc2" 00:04:14.593 } 00:04:14.593 } 00:04:14.593 } 00:04:14.593 ]' 00:04:14.593 09:38:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:14.593 09:38:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:14.593 09:38:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:14.593 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.593 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.593 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.593 09:38:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:14.593 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:04:14.593 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.593 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.854 09:38:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:14.854 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.854 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.854 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.854 09:38:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:14.854 09:38:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:14.854 09:38:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:14.854 00:04:14.854 real 0m0.240s 00:04:14.854 user 0m0.124s 00:04:14.854 sys 0m0.034s 00:04:14.854 ************************************ 00:04:14.854 END TEST rpc_daemon_integrity 00:04:14.854 ************************************ 00:04:14.854 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:14.854 09:38:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.854 09:38:53 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:14.854 09:38:53 rpc -- rpc/rpc.sh@84 -- # killprocess 56207 00:04:14.854 09:38:53 rpc -- common/autotest_common.sh@952 -- # '[' -z 56207 ']' 00:04:14.854 09:38:53 rpc -- common/autotest_common.sh@956 -- # kill -0 56207 00:04:14.854 09:38:53 rpc -- common/autotest_common.sh@957 -- # uname 00:04:14.854 09:38:53 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:14.854 09:38:53 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56207 00:04:14.854 09:38:53 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:14.854 09:38:53 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:14.854 
killing process with pid 56207 00:04:14.854 09:38:53 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56207' 00:04:14.854 09:38:53 rpc -- common/autotest_common.sh@971 -- # kill 56207 00:04:14.854 09:38:53 rpc -- common/autotest_common.sh@976 -- # wait 56207 00:04:16.238 00:04:16.238 real 0m3.638s 00:04:16.238 user 0m4.092s 00:04:16.238 sys 0m0.581s 00:04:16.238 09:38:54 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:16.238 09:38:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.238 ************************************ 00:04:16.238 END TEST rpc 00:04:16.238 ************************************ 00:04:16.560 09:38:54 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:16.560 09:38:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:16.560 09:38:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:16.560 09:38:54 -- common/autotest_common.sh@10 -- # set +x 00:04:16.560 ************************************ 00:04:16.560 START TEST skip_rpc 00:04:16.560 ************************************ 00:04:16.560 09:38:54 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:16.560 * Looking for test storage... 
00:04:16.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:16.560 09:38:54 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:16.560 09:38:54 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:16.560 09:38:54 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:16.560 09:38:55 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.560 09:38:55 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:16.560 09:38:55 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.560 09:38:55 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:16.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.560 --rc genhtml_branch_coverage=1 00:04:16.560 --rc genhtml_function_coverage=1 00:04:16.560 --rc genhtml_legend=1 00:04:16.560 --rc geninfo_all_blocks=1 00:04:16.560 --rc geninfo_unexecuted_blocks=1 00:04:16.560 00:04:16.560 ' 00:04:16.560 09:38:55 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:16.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.560 --rc genhtml_branch_coverage=1 00:04:16.560 --rc genhtml_function_coverage=1 00:04:16.560 --rc genhtml_legend=1 00:04:16.560 --rc geninfo_all_blocks=1 00:04:16.560 --rc geninfo_unexecuted_blocks=1 00:04:16.560 00:04:16.560 ' 00:04:16.560 09:38:55 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:04:16.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.560 --rc genhtml_branch_coverage=1 00:04:16.560 --rc genhtml_function_coverage=1 00:04:16.560 --rc genhtml_legend=1 00:04:16.560 --rc geninfo_all_blocks=1 00:04:16.560 --rc geninfo_unexecuted_blocks=1 00:04:16.560 00:04:16.560 ' 00:04:16.560 09:38:55 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:16.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.560 --rc genhtml_branch_coverage=1 00:04:16.560 --rc genhtml_function_coverage=1 00:04:16.560 --rc genhtml_legend=1 00:04:16.560 --rc geninfo_all_blocks=1 00:04:16.560 --rc geninfo_unexecuted_blocks=1 00:04:16.560 00:04:16.560 ' 00:04:16.560 09:38:55 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:16.560 09:38:55 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:16.560 09:38:55 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:16.560 09:38:55 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:16.560 09:38:55 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:16.560 09:38:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.560 ************************************ 00:04:16.560 START TEST skip_rpc 00:04:16.560 ************************************ 00:04:16.560 09:38:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:04:16.560 09:38:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56419 00:04:16.560 09:38:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.560 09:38:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:16.560 09:38:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:16.560 [2024-10-30 09:38:55.147407] Starting SPDK v25.01-pre 
git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:04:16.560 [2024-10-30 09:38:55.147530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56419 ] 00:04:16.821 [2024-10-30 09:38:55.321481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.081 [2024-10-30 09:38:55.444793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56419 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 56419 ']' 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 56419 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56419 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56419' 00:04:22.370 killing process with pid 56419 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 56419 00:04:22.370 09:39:00 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 56419 00:04:23.309 00:04:23.309 real 0m6.538s 00:04:23.309 user 0m6.147s 00:04:23.309 sys 0m0.280s 00:04:23.309 09:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:23.309 ************************************ 00:04:23.309 END TEST skip_rpc 00:04:23.309 ************************************ 00:04:23.309 09:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.309 09:39:01 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:23.309 09:39:01 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:23.309 09:39:01 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:23.309 09:39:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.309 
************************************ 00:04:23.309 START TEST skip_rpc_with_json 00:04:23.309 ************************************ 00:04:23.309 09:39:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:23.309 09:39:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:23.309 09:39:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56518 00:04:23.309 09:39:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.309 09:39:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56518 00:04:23.309 09:39:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 56518 ']' 00:04:23.309 09:39:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.309 09:39:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:23.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.309 09:39:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:23.309 09:39:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.309 09:39:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:23.309 09:39:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:23.309 [2024-10-30 09:39:01.748438] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:04:23.309 [2024-10-30 09:39:01.748561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56518 ] 00:04:23.309 [2024-10-30 09:39:01.903866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.569 [2024-10-30 09:39:02.021899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.140 09:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:24.140 09:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:04:24.140 09:39:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:24.140 09:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.140 09:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:24.140 [2024-10-30 09:39:02.665457] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:24.140 request: 00:04:24.140 { 00:04:24.140 "trtype": "tcp", 00:04:24.140 "method": "nvmf_get_transports", 00:04:24.140 "req_id": 1 00:04:24.140 } 00:04:24.140 Got JSON-RPC error response 00:04:24.140 response: 00:04:24.140 { 00:04:24.140 "code": -19, 00:04:24.140 "message": "No such device" 00:04:24.140 } 00:04:24.140 09:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:24.140 09:39:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:24.140 09:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.140 09:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:24.140 [2024-10-30 09:39:02.677565] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:24.140 09:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.140 09:39:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:24.140 09:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.140 09:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:24.401 09:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.401 09:39:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:24.401 { 00:04:24.401 "subsystems": [ 00:04:24.401 { 00:04:24.401 "subsystem": "fsdev", 00:04:24.401 "config": [ 00:04:24.401 { 00:04:24.401 "method": "fsdev_set_opts", 00:04:24.401 "params": { 00:04:24.401 "fsdev_io_pool_size": 65535, 00:04:24.401 "fsdev_io_cache_size": 256 00:04:24.401 } 00:04:24.401 } 00:04:24.401 ] 00:04:24.401 }, 00:04:24.401 { 00:04:24.401 "subsystem": "keyring", 00:04:24.401 "config": [] 00:04:24.401 }, 00:04:24.401 { 00:04:24.401 "subsystem": "iobuf", 00:04:24.401 "config": [ 00:04:24.401 { 00:04:24.401 "method": "iobuf_set_options", 00:04:24.401 "params": { 00:04:24.401 "small_pool_count": 8192, 00:04:24.401 "large_pool_count": 1024, 00:04:24.401 "small_bufsize": 8192, 00:04:24.401 "large_bufsize": 135168, 00:04:24.401 "enable_numa": false 00:04:24.401 } 00:04:24.401 } 00:04:24.401 ] 00:04:24.401 }, 00:04:24.401 { 00:04:24.401 "subsystem": "sock", 00:04:24.401 "config": [ 00:04:24.401 { 00:04:24.401 "method": "sock_set_default_impl", 00:04:24.401 "params": { 00:04:24.401 "impl_name": "posix" 00:04:24.401 } 00:04:24.401 }, 00:04:24.401 { 00:04:24.401 "method": "sock_impl_set_options", 00:04:24.401 "params": { 00:04:24.401 "impl_name": "ssl", 00:04:24.401 "recv_buf_size": 4096, 00:04:24.401 "send_buf_size": 4096, 00:04:24.401 "enable_recv_pipe": true, 00:04:24.401 "enable_quickack": false, 00:04:24.401 
"enable_placement_id": 0, 00:04:24.401 "enable_zerocopy_send_server": true, 00:04:24.401 "enable_zerocopy_send_client": false, 00:04:24.401 "zerocopy_threshold": 0, 00:04:24.401 "tls_version": 0, 00:04:24.401 "enable_ktls": false 00:04:24.401 } 00:04:24.401 }, 00:04:24.401 { 00:04:24.401 "method": "sock_impl_set_options", 00:04:24.401 "params": { 00:04:24.401 "impl_name": "posix", 00:04:24.401 "recv_buf_size": 2097152, 00:04:24.401 "send_buf_size": 2097152, 00:04:24.401 "enable_recv_pipe": true, 00:04:24.401 "enable_quickack": false, 00:04:24.401 "enable_placement_id": 0, 00:04:24.401 "enable_zerocopy_send_server": true, 00:04:24.401 "enable_zerocopy_send_client": false, 00:04:24.401 "zerocopy_threshold": 0, 00:04:24.401 "tls_version": 0, 00:04:24.401 "enable_ktls": false 00:04:24.401 } 00:04:24.401 } 00:04:24.401 ] 00:04:24.401 }, 00:04:24.401 { 00:04:24.401 "subsystem": "vmd", 00:04:24.401 "config": [] 00:04:24.401 }, 00:04:24.401 { 00:04:24.401 "subsystem": "accel", 00:04:24.401 "config": [ 00:04:24.401 { 00:04:24.401 "method": "accel_set_options", 00:04:24.401 "params": { 00:04:24.401 "small_cache_size": 128, 00:04:24.401 "large_cache_size": 16, 00:04:24.401 "task_count": 2048, 00:04:24.401 "sequence_count": 2048, 00:04:24.401 "buf_count": 2048 00:04:24.401 } 00:04:24.401 } 00:04:24.401 ] 00:04:24.401 }, 00:04:24.401 { 00:04:24.401 "subsystem": "bdev", 00:04:24.401 "config": [ 00:04:24.401 { 00:04:24.401 "method": "bdev_set_options", 00:04:24.401 "params": { 00:04:24.401 "bdev_io_pool_size": 65535, 00:04:24.401 "bdev_io_cache_size": 256, 00:04:24.401 "bdev_auto_examine": true, 00:04:24.401 "iobuf_small_cache_size": 128, 00:04:24.401 "iobuf_large_cache_size": 16 00:04:24.401 } 00:04:24.401 }, 00:04:24.401 { 00:04:24.401 "method": "bdev_raid_set_options", 00:04:24.401 "params": { 00:04:24.401 "process_window_size_kb": 1024, 00:04:24.401 "process_max_bandwidth_mb_sec": 0 00:04:24.401 } 00:04:24.401 }, 00:04:24.401 { 00:04:24.401 "method": "bdev_iscsi_set_options", 
00:04:24.401 "params": { 00:04:24.401 "timeout_sec": 30 00:04:24.401 } 00:04:24.401 }, 00:04:24.402 { 00:04:24.402 "method": "bdev_nvme_set_options", 00:04:24.402 "params": { 00:04:24.402 "action_on_timeout": "none", 00:04:24.402 "timeout_us": 0, 00:04:24.402 "timeout_admin_us": 0, 00:04:24.402 "keep_alive_timeout_ms": 10000, 00:04:24.402 "arbitration_burst": 0, 00:04:24.402 "low_priority_weight": 0, 00:04:24.402 "medium_priority_weight": 0, 00:04:24.402 "high_priority_weight": 0, 00:04:24.402 "nvme_adminq_poll_period_us": 10000, 00:04:24.402 "nvme_ioq_poll_period_us": 0, 00:04:24.402 "io_queue_requests": 0, 00:04:24.402 "delay_cmd_submit": true, 00:04:24.402 "transport_retry_count": 4, 00:04:24.402 "bdev_retry_count": 3, 00:04:24.402 "transport_ack_timeout": 0, 00:04:24.402 "ctrlr_loss_timeout_sec": 0, 00:04:24.402 "reconnect_delay_sec": 0, 00:04:24.402 "fast_io_fail_timeout_sec": 0, 00:04:24.402 "disable_auto_failback": false, 00:04:24.402 "generate_uuids": false, 00:04:24.402 "transport_tos": 0, 00:04:24.402 "nvme_error_stat": false, 00:04:24.402 "rdma_srq_size": 0, 00:04:24.402 "io_path_stat": false, 00:04:24.402 "allow_accel_sequence": false, 00:04:24.402 "rdma_max_cq_size": 0, 00:04:24.402 "rdma_cm_event_timeout_ms": 0, 00:04:24.402 "dhchap_digests": [ 00:04:24.402 "sha256", 00:04:24.402 "sha384", 00:04:24.402 "sha512" 00:04:24.402 ], 00:04:24.402 "dhchap_dhgroups": [ 00:04:24.402 "null", 00:04:24.402 "ffdhe2048", 00:04:24.402 "ffdhe3072", 00:04:24.402 "ffdhe4096", 00:04:24.402 "ffdhe6144", 00:04:24.402 "ffdhe8192" 00:04:24.402 ] 00:04:24.402 } 00:04:24.402 }, 00:04:24.402 { 00:04:24.402 "method": "bdev_nvme_set_hotplug", 00:04:24.402 "params": { 00:04:24.402 "period_us": 100000, 00:04:24.402 "enable": false 00:04:24.402 } 00:04:24.402 }, 00:04:24.402 { 00:04:24.402 "method": "bdev_wait_for_examine" 00:04:24.402 } 00:04:24.402 ] 00:04:24.402 }, 00:04:24.402 { 00:04:24.402 "subsystem": "scsi", 00:04:24.402 "config": null 00:04:24.402 }, 00:04:24.402 { 
00:04:24.402 "subsystem": "scheduler", 00:04:24.402 "config": [ 00:04:24.402 { 00:04:24.402 "method": "framework_set_scheduler", 00:04:24.402 "params": { 00:04:24.402 "name": "static" 00:04:24.402 } 00:04:24.402 } 00:04:24.402 ] 00:04:24.402 }, 00:04:24.402 { 00:04:24.402 "subsystem": "vhost_scsi", 00:04:24.402 "config": [] 00:04:24.402 }, 00:04:24.402 { 00:04:24.402 "subsystem": "vhost_blk", 00:04:24.402 "config": [] 00:04:24.402 }, 00:04:24.402 { 00:04:24.402 "subsystem": "ublk", 00:04:24.402 "config": [] 00:04:24.402 }, 00:04:24.402 { 00:04:24.402 "subsystem": "nbd", 00:04:24.402 "config": [] 00:04:24.402 }, 00:04:24.402 { 00:04:24.402 "subsystem": "nvmf", 00:04:24.402 "config": [ 00:04:24.402 { 00:04:24.402 "method": "nvmf_set_config", 00:04:24.402 "params": { 00:04:24.402 "discovery_filter": "match_any", 00:04:24.402 "admin_cmd_passthru": { 00:04:24.402 "identify_ctrlr": false 00:04:24.402 }, 00:04:24.402 "dhchap_digests": [ 00:04:24.402 "sha256", 00:04:24.402 "sha384", 00:04:24.402 "sha512" 00:04:24.402 ], 00:04:24.402 "dhchap_dhgroups": [ 00:04:24.402 "null", 00:04:24.402 "ffdhe2048", 00:04:24.402 "ffdhe3072", 00:04:24.402 "ffdhe4096", 00:04:24.402 "ffdhe6144", 00:04:24.402 "ffdhe8192" 00:04:24.402 ] 00:04:24.402 } 00:04:24.402 }, 00:04:24.402 { 00:04:24.402 "method": "nvmf_set_max_subsystems", 00:04:24.402 "params": { 00:04:24.402 "max_subsystems": 1024 00:04:24.402 } 00:04:24.402 }, 00:04:24.402 { 00:04:24.402 "method": "nvmf_set_crdt", 00:04:24.402 "params": { 00:04:24.402 "crdt1": 0, 00:04:24.402 "crdt2": 0, 00:04:24.402 "crdt3": 0 00:04:24.402 } 00:04:24.402 }, 00:04:24.402 { 00:04:24.402 "method": "nvmf_create_transport", 00:04:24.402 "params": { 00:04:24.402 "trtype": "TCP", 00:04:24.402 "max_queue_depth": 128, 00:04:24.402 "max_io_qpairs_per_ctrlr": 127, 00:04:24.402 "in_capsule_data_size": 4096, 00:04:24.402 "max_io_size": 131072, 00:04:24.402 "io_unit_size": 131072, 00:04:24.402 "max_aq_depth": 128, 00:04:24.402 "num_shared_buffers": 511, 
00:04:24.402 "buf_cache_size": 4294967295, 00:04:24.402 "dif_insert_or_strip": false, 00:04:24.402 "zcopy": false, 00:04:24.402 "c2h_success": true, 00:04:24.402 "sock_priority": 0, 00:04:24.402 "abort_timeout_sec": 1, 00:04:24.402 "ack_timeout": 0, 00:04:24.402 "data_wr_pool_size": 0 00:04:24.402 } 00:04:24.402 } 00:04:24.402 ] 00:04:24.402 }, 00:04:24.402 { 00:04:24.402 "subsystem": "iscsi", 00:04:24.402 "config": [ 00:04:24.402 { 00:04:24.402 "method": "iscsi_set_options", 00:04:24.402 "params": { 00:04:24.402 "node_base": "iqn.2016-06.io.spdk", 00:04:24.402 "max_sessions": 128, 00:04:24.402 "max_connections_per_session": 2, 00:04:24.402 "max_queue_depth": 64, 00:04:24.402 "default_time2wait": 2, 00:04:24.402 "default_time2retain": 20, 00:04:24.402 "first_burst_length": 8192, 00:04:24.402 "immediate_data": true, 00:04:24.402 "allow_duplicated_isid": false, 00:04:24.402 "error_recovery_level": 0, 00:04:24.402 "nop_timeout": 60, 00:04:24.402 "nop_in_interval": 30, 00:04:24.402 "disable_chap": false, 00:04:24.402 "require_chap": false, 00:04:24.402 "mutual_chap": false, 00:04:24.402 "chap_group": 0, 00:04:24.402 "max_large_datain_per_connection": 64, 00:04:24.402 "max_r2t_per_connection": 4, 00:04:24.402 "pdu_pool_size": 36864, 00:04:24.402 "immediate_data_pool_size": 16384, 00:04:24.402 "data_out_pool_size": 2048 00:04:24.402 } 00:04:24.402 } 00:04:24.402 ] 00:04:24.402 } 00:04:24.402 ] 00:04:24.402 } 00:04:24.402 09:39:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:24.402 09:39:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56518 00:04:24.402 09:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 56518 ']' 00:04:24.402 09:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 56518 00:04:24.402 09:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:24.402 09:39:02 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:24.402 09:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56518 00:04:24.402 09:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:24.402 killing process with pid 56518 00:04:24.402 09:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:24.403 09:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56518' 00:04:24.403 09:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 56518 00:04:24.403 09:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 56518 00:04:25.785 09:39:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=56557 00:04:25.785 09:39:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:25.785 09:39:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:31.071 09:39:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 56557 00:04:31.071 09:39:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 56557 ']' 00:04:31.071 09:39:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 56557 00:04:31.071 09:39:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:31.071 09:39:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:31.071 09:39:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56557 00:04:31.071 killing process with pid 56557 00:04:31.071 09:39:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:31.071 09:39:09 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:31.071 09:39:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56557' 00:04:31.071 09:39:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 56557 00:04:31.071 09:39:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 56557 00:04:32.454 09:39:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:32.454 09:39:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:32.454 ************************************ 00:04:32.454 END TEST skip_rpc_with_json 00:04:32.454 ************************************ 00:04:32.454 00:04:32.454 real 0m9.207s 00:04:32.454 user 0m8.847s 00:04:32.454 sys 0m0.626s 00:04:32.454 09:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:32.454 09:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.454 09:39:10 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:32.454 09:39:10 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:32.454 09:39:10 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:32.454 09:39:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.454 ************************************ 00:04:32.454 START TEST skip_rpc_with_delay 00:04:32.454 ************************************ 00:04:32.454 09:39:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:32.454 09:39:10 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:32.454 09:39:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:32.454 
09:39:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:32.454 09:39:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:32.454 09:39:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:32.454 09:39:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:32.454 09:39:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:32.454 09:39:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:32.454 09:39:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:32.454 09:39:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:32.454 09:39:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:32.454 09:39:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:32.454 [2024-10-30 09:39:11.020379] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
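The `*ERROR*: Cannot use '--wait-for-rpc'…` line above is the expected outcome: the `skip_rpc_with_delay` test wraps `spdk_tgt` in a `NOT` helper (visible in the trace as `-- # NOT … spdk_tgt --no-rpc-server … --wait-for-rpc`) that passes only if the command fails. A minimal sketch of that expect-failure pattern (the helper name `expect_failure` is hypothetical, not the harness's actual function):

```shell
#!/usr/bin/env bash
# Sketch of the "NOT" pattern used by the test harness above:
# run a command that is expected to fail, and succeed only if it does fail.
expect_failure() {
  if "$@"; then
    echo "ERROR: '$*' unexpectedly succeeded" >&2
    return 1
  fi
  return 0
}

# Example: 'false' always exits nonzero, so the expectation holds.
expect_failure false && echo "failure detected as expected"
```

In the log, the wrapper's exit status is then folded into the `es=` bookkeeping (`es=1` when the failure was observed), which is what the `(( !es == 0 ))` check at the end of the test verifies.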
00:04:32.454 09:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:32.454 09:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:32.454 09:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:32.454 ************************************ 00:04:32.454 END TEST skip_rpc_with_delay 00:04:32.454 ************************************ 00:04:32.454 09:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:32.454 00:04:32.454 real 0m0.127s 00:04:32.454 user 0m0.059s 00:04:32.454 sys 0m0.067s 00:04:32.454 09:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:32.454 09:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:32.714 09:39:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:32.714 09:39:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:32.714 09:39:11 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:32.714 09:39:11 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:32.714 09:39:11 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:32.714 09:39:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.714 ************************************ 00:04:32.714 START TEST exit_on_failed_rpc_init 00:04:32.714 ************************************ 00:04:32.714 09:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:32.714 09:39:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=56680 00:04:32.714 09:39:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 56680 00:04:32.714 09:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 56680 ']' 00:04:32.714 09:39:11 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.714 09:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:32.714 09:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.714 09:39:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:32.714 09:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:32.714 09:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:32.714 [2024-10-30 09:39:11.212527] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:04:32.714 [2024-10-30 09:39:11.212641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56680 ] 00:04:32.976 [2024-10-30 09:39:11.370047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.976 [2024-10-30 09:39:11.470885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.546 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:33.546 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:33.546 09:39:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.546 09:39:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:33.546 09:39:12 skip_rpc.exit_on_failed_rpc_init 
-- common/autotest_common.sh@650 -- # local es=0 00:04:33.546 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:33.546 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:33.546 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:33.546 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:33.546 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:33.546 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:33.546 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:33.546 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:33.546 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:33.546 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:33.808 [2024-10-30 09:39:12.184901] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:04:33.808 [2024-10-30 09:39:12.185014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56698 ] 00:04:33.808 [2024-10-30 09:39:12.375643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.069 [2024-10-30 09:39:12.476905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:34.069 [2024-10-30 09:39:12.476990] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:34.069 [2024-10-30 09:39:12.477003] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:34.069 [2024-10-30 09:39:12.477013] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:34.069 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:34.069 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:34.069 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:34.069 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:34.069 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:34.069 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:34.069 09:39:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:34.069 09:39:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 56680 00:04:34.069 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 56680 ']' 00:04:34.069 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 56680 00:04:34.069 09:39:12 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:34.069 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:34.069 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56680 00:04:34.069 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:34.069 killing process with pid 56680 00:04:34.069 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:34.069 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56680' 00:04:34.069 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 56680 00:04:34.069 09:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 56680 00:04:35.982 00:04:35.982 real 0m3.028s 00:04:35.982 user 0m3.359s 00:04:35.982 sys 0m0.452s 00:04:35.982 09:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:35.982 ************************************ 00:04:35.982 END TEST exit_on_failed_rpc_init 00:04:35.982 ************************************ 00:04:35.982 09:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:35.982 09:39:14 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:35.982 00:04:35.982 real 0m19.317s 00:04:35.982 user 0m18.580s 00:04:35.982 sys 0m1.598s 00:04:35.982 09:39:14 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:35.982 ************************************ 00:04:35.982 END TEST skip_rpc 00:04:35.982 ************************************ 00:04:35.982 09:39:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.982 09:39:14 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:35.982 09:39:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:35.982 09:39:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:35.982 09:39:14 -- common/autotest_common.sh@10 -- # set +x 00:04:35.982 ************************************ 00:04:35.982 START TEST rpc_client 00:04:35.982 ************************************ 00:04:35.982 09:39:14 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:35.982 * Looking for test storage... 00:04:35.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:35.982 09:39:14 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:35.982 09:39:14 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:35.982 09:39:14 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:35.982 09:39:14 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.982 09:39:14 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:35.982 09:39:14 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.982 09:39:14 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:35.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.982 --rc genhtml_branch_coverage=1 00:04:35.982 --rc genhtml_function_coverage=1 00:04:35.982 --rc genhtml_legend=1 00:04:35.982 --rc geninfo_all_blocks=1 00:04:35.982 --rc geninfo_unexecuted_blocks=1 00:04:35.982 00:04:35.982 ' 00:04:35.982 09:39:14 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:35.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.982 --rc genhtml_branch_coverage=1 00:04:35.982 --rc genhtml_function_coverage=1 00:04:35.982 --rc 
genhtml_legend=1 00:04:35.982 --rc geninfo_all_blocks=1 00:04:35.982 --rc geninfo_unexecuted_blocks=1 00:04:35.982 00:04:35.982 ' 00:04:35.982 09:39:14 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:35.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.983 --rc genhtml_branch_coverage=1 00:04:35.983 --rc genhtml_function_coverage=1 00:04:35.983 --rc genhtml_legend=1 00:04:35.983 --rc geninfo_all_blocks=1 00:04:35.983 --rc geninfo_unexecuted_blocks=1 00:04:35.983 00:04:35.983 ' 00:04:35.983 09:39:14 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:35.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.983 --rc genhtml_branch_coverage=1 00:04:35.983 --rc genhtml_function_coverage=1 00:04:35.983 --rc genhtml_legend=1 00:04:35.983 --rc geninfo_all_blocks=1 00:04:35.983 --rc geninfo_unexecuted_blocks=1 00:04:35.983 00:04:35.983 ' 00:04:35.983 09:39:14 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:35.983 OK 00:04:35.983 09:39:14 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:35.983 00:04:35.983 real 0m0.196s 00:04:35.983 user 0m0.112s 00:04:35.983 sys 0m0.086s 00:04:35.983 ************************************ 00:04:35.983 END TEST rpc_client 00:04:35.983 09:39:14 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:35.983 09:39:14 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:35.983 ************************************ 00:04:35.983 09:39:14 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:35.983 09:39:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:35.983 09:39:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:35.983 09:39:14 -- common/autotest_common.sh@10 -- # set +x 00:04:35.983 ************************************ 00:04:35.983 START TEST json_config 
00:04:35.983 ************************************ 00:04:35.983 09:39:14 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:35.983 09:39:14 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:35.983 09:39:14 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:35.983 09:39:14 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:36.244 09:39:14 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:36.244 09:39:14 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.244 09:39:14 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.244 09:39:14 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.244 09:39:14 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.244 09:39:14 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.244 09:39:14 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.244 09:39:14 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.244 09:39:14 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.244 09:39:14 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.244 09:39:14 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.244 09:39:14 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.244 09:39:14 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:36.244 09:39:14 json_config -- scripts/common.sh@345 -- # : 1 00:04:36.244 09:39:14 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.244 09:39:14 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.244 09:39:14 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:36.244 09:39:14 json_config -- scripts/common.sh@353 -- # local d=1 00:04:36.244 09:39:14 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.244 09:39:14 json_config -- scripts/common.sh@355 -- # echo 1 00:04:36.244 09:39:14 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.244 09:39:14 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:36.244 09:39:14 json_config -- scripts/common.sh@353 -- # local d=2 00:04:36.244 09:39:14 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.244 09:39:14 json_config -- scripts/common.sh@355 -- # echo 2 00:04:36.244 09:39:14 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.244 09:39:14 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.244 09:39:14 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.244 09:39:14 json_config -- scripts/common.sh@368 -- # return 0 00:04:36.244 09:39:14 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.244 09:39:14 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:36.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.244 --rc genhtml_branch_coverage=1 00:04:36.244 --rc genhtml_function_coverage=1 00:04:36.244 --rc genhtml_legend=1 00:04:36.244 --rc geninfo_all_blocks=1 00:04:36.244 --rc geninfo_unexecuted_blocks=1 00:04:36.244 00:04:36.244 ' 00:04:36.244 09:39:14 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:36.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.245 --rc genhtml_branch_coverage=1 00:04:36.245 --rc genhtml_function_coverage=1 00:04:36.245 --rc genhtml_legend=1 00:04:36.245 --rc geninfo_all_blocks=1 00:04:36.245 --rc geninfo_unexecuted_blocks=1 00:04:36.245 00:04:36.245 ' 00:04:36.245 09:39:14 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:36.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.245 --rc genhtml_branch_coverage=1 00:04:36.245 --rc genhtml_function_coverage=1 00:04:36.245 --rc genhtml_legend=1 00:04:36.245 --rc geninfo_all_blocks=1 00:04:36.245 --rc geninfo_unexecuted_blocks=1 00:04:36.245 00:04:36.245 ' 00:04:36.245 09:39:14 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:36.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.245 --rc genhtml_branch_coverage=1 00:04:36.245 --rc genhtml_function_coverage=1 00:04:36.245 --rc genhtml_legend=1 00:04:36.245 --rc geninfo_all_blocks=1 00:04:36.245 --rc geninfo_unexecuted_blocks=1 00:04:36.245 00:04:36.245 ' 00:04:36.245 09:39:14 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88ccc72b-a20f-4a89-a160-d5a9e382087b 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=88ccc72b-a20f-4a89-a160-d5a9e382087b 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:36.245 09:39:14 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:36.245 09:39:14 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:36.245 09:39:14 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:36.245 09:39:14 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:36.245 09:39:14 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.245 09:39:14 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.245 09:39:14 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.245 09:39:14 json_config -- paths/export.sh@5 -- # export PATH 00:04:36.245 09:39:14 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@51 -- # : 0 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:36.245 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:36.245 09:39:14 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:36.245 09:39:14 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
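The non-fatal error traced above (`[: : integer expression expected` from nvmf/common.sh line 33) comes from `[` being asked to compare an empty string numerically: `'[' '' -eq 1 ']'`. A minimal reproduction and a common defensive pattern, as a sketch (the variable name here is illustrative, not taken from the script):

```shell
#!/usr/bin/env bash
# Reproduce the error class seen in the log: `[` cannot apply -eq to "".
flag=""
if [ "$flag" -eq 1 ] 2>/dev/null; then   # fails: "" is not an integer
  echo "enabled"
fi

# Defensive variant: default the empty/unset value to 0 before comparing.
if [ "${flag:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
```

With the `${flag:-0}` default the test is always given an integer, so the comparison succeeds and the script takes the `else` branch instead of emitting the warning.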
00:04:36.245 09:39:14 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:36.245 09:39:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:36.245 09:39:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:36.245 09:39:14 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:36.245 WARNING: No tests are enabled so not running JSON configuration tests 00:04:36.245 09:39:14 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:36.245 09:39:14 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:36.245 00:04:36.245 real 0m0.144s 00:04:36.245 user 0m0.087s 00:04:36.245 sys 0m0.058s 00:04:36.245 09:39:14 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:36.245 09:39:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.245 ************************************ 00:04:36.245 END TEST json_config 00:04:36.245 ************************************ 00:04:36.245 09:39:14 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:36.245 09:39:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:36.245 09:39:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:36.245 09:39:14 -- common/autotest_common.sh@10 -- # set +x 00:04:36.245 ************************************ 00:04:36.245 START TEST json_config_extra_key 00:04:36.245 ************************************ 00:04:36.245 09:39:14 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:36.245 09:39:14 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:36.245 09:39:14 json_config_extra_key -- 
common/autotest_common.sh@1691 -- # lcov --version 00:04:36.245 09:39:14 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:36.245 09:39:14 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.245 09:39:14 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:36.245 09:39:14 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.245 09:39:14 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:36.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.245 --rc genhtml_branch_coverage=1 00:04:36.245 --rc genhtml_function_coverage=1 00:04:36.245 --rc genhtml_legend=1 00:04:36.245 --rc geninfo_all_blocks=1 00:04:36.245 --rc geninfo_unexecuted_blocks=1 00:04:36.245 00:04:36.245 ' 00:04:36.245 09:39:14 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:36.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.245 --rc genhtml_branch_coverage=1 00:04:36.245 --rc genhtml_function_coverage=1 00:04:36.245 --rc 
genhtml_legend=1 00:04:36.245 --rc geninfo_all_blocks=1 00:04:36.245 --rc geninfo_unexecuted_blocks=1 00:04:36.245 00:04:36.245 ' 00:04:36.245 09:39:14 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:36.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.245 --rc genhtml_branch_coverage=1 00:04:36.245 --rc genhtml_function_coverage=1 00:04:36.245 --rc genhtml_legend=1 00:04:36.245 --rc geninfo_all_blocks=1 00:04:36.245 --rc geninfo_unexecuted_blocks=1 00:04:36.245 00:04:36.245 ' 00:04:36.245 09:39:14 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:36.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.245 --rc genhtml_branch_coverage=1 00:04:36.245 --rc genhtml_function_coverage=1 00:04:36.245 --rc genhtml_legend=1 00:04:36.246 --rc geninfo_all_blocks=1 00:04:36.246 --rc geninfo_unexecuted_blocks=1 00:04:36.246 00:04:36.246 ' 00:04:36.246 09:39:14 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88ccc72b-a20f-4a89-a160-d5a9e382087b 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=88ccc72b-a20f-4a89-a160-d5a9e382087b 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:36.246 09:39:14 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:36.246 09:39:14 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:36.246 09:39:14 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:36.246 09:39:14 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:36.246 09:39:14 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.246 09:39:14 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.246 09:39:14 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.246 09:39:14 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:36.246 09:39:14 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:36.246 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:36.246 09:39:14 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:36.246 09:39:14 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:36.507 09:39:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:36.507 09:39:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:36.507 09:39:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:36.507 09:39:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:36.507 09:39:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:36.507 09:39:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:36.507 09:39:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:36.507 09:39:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:36.507 09:39:14 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:36.507 INFO: launching applications... 00:04:36.507 09:39:14 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:04:36.508 09:39:14 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:36.508 09:39:14 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:36.508 09:39:14 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:36.508 09:39:14 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:36.508 09:39:14 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:36.508 09:39:14 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:36.508 09:39:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.508 09:39:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.508 09:39:14 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=56897 00:04:36.508 Waiting for target to run... 00:04:36.508 09:39:14 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:36.508 09:39:14 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 56897 /var/tmp/spdk_tgt.sock 00:04:36.508 09:39:14 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 56897 ']' 00:04:36.508 09:39:14 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:36.508 09:39:14 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:36.508 09:39:14 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:36.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:36.508 09:39:14 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:36.508 09:39:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:36.508 09:39:14 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:36.508 [2024-10-30 09:39:14.940556] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:04:36.508 [2024-10-30 09:39:14.940827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56897 ] 00:04:36.768 [2024-10-30 09:39:15.268787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.768 [2024-10-30 09:39:15.363860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.339 09:39:15 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:37.340 09:39:15 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:37.340 00:04:37.340 INFO: shutting down applications... 00:04:37.340 09:39:15 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:37.340 09:39:15 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:37.340 09:39:15 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:37.340 09:39:15 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:37.340 09:39:15 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:37.340 09:39:15 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 56897 ]] 00:04:37.340 09:39:15 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 56897 00:04:37.340 09:39:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:37.340 09:39:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.340 09:39:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56897 00:04:37.340 09:39:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:37.912 09:39:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:37.912 09:39:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.912 09:39:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56897 00:04:37.912 09:39:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:38.514 09:39:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:38.514 09:39:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.514 09:39:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56897 00:04:38.514 09:39:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:38.776 09:39:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:38.776 09:39:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.776 09:39:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56897 00:04:38.776 09:39:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:39.349 SPDK target shutdown done 00:04:39.349 09:39:17 
json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:39.349 09:39:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.349 09:39:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56897 00:04:39.349 09:39:17 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:39.349 09:39:17 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:39.349 09:39:17 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:39.349 09:39:17 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:39.349 Success 00:04:39.349 09:39:17 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:39.349 ************************************ 00:04:39.349 END TEST json_config_extra_key 00:04:39.349 ************************************ 00:04:39.349 00:04:39.349 real 0m3.156s 00:04:39.349 user 0m2.772s 00:04:39.349 sys 0m0.399s 00:04:39.349 09:39:17 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:39.349 09:39:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:39.349 09:39:17 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:39.349 09:39:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:39.349 09:39:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:39.349 09:39:17 -- common/autotest_common.sh@10 -- # set +x 00:04:39.349 ************************************ 00:04:39.349 START TEST alias_rpc 00:04:39.349 ************************************ 00:04:39.349 09:39:17 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:39.610 * Looking for test storage... 
00:04:39.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:39.610 09:39:18 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:39.610 09:39:18 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:39.610 09:39:18 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:39.610 09:39:18 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:39.610 09:39:18 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.610 09:39:18 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.610 09:39:18 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.611 09:39:18 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.611 09:39:18 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.611 09:39:18 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.611 09:39:18 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.611 09:39:18 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.611 09:39:18 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.611 09:39:18 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.611 09:39:18 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.611 09:39:18 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:39.611 09:39:18 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:39.611 09:39:18 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.611 09:39:18 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.611 09:39:18 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:39.611 09:39:18 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:39.611 09:39:18 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.611 09:39:18 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:39.611 09:39:18 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.611 09:39:18 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:39.611 09:39:18 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:39.611 09:39:18 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.611 09:39:18 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:39.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.611 09:39:18 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.611 09:39:18 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.611 09:39:18 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.611 09:39:18 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:39.611 09:39:18 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.611 09:39:18 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:39.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.611 --rc genhtml_branch_coverage=1 00:04:39.611 --rc genhtml_function_coverage=1 00:04:39.611 --rc genhtml_legend=1 00:04:39.611 --rc geninfo_all_blocks=1 00:04:39.611 --rc geninfo_unexecuted_blocks=1 00:04:39.611 00:04:39.611 ' 00:04:39.611 09:39:18 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:39.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.611 --rc genhtml_branch_coverage=1 00:04:39.611 --rc genhtml_function_coverage=1 00:04:39.611 --rc genhtml_legend=1 00:04:39.611 --rc geninfo_all_blocks=1 00:04:39.611 --rc 
geninfo_unexecuted_blocks=1 00:04:39.611 00:04:39.611 ' 00:04:39.611 09:39:18 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:39.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.611 --rc genhtml_branch_coverage=1 00:04:39.611 --rc genhtml_function_coverage=1 00:04:39.611 --rc genhtml_legend=1 00:04:39.611 --rc geninfo_all_blocks=1 00:04:39.611 --rc geninfo_unexecuted_blocks=1 00:04:39.611 00:04:39.611 ' 00:04:39.611 09:39:18 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:39.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.611 --rc genhtml_branch_coverage=1 00:04:39.611 --rc genhtml_function_coverage=1 00:04:39.611 --rc genhtml_legend=1 00:04:39.611 --rc geninfo_all_blocks=1 00:04:39.611 --rc geninfo_unexecuted_blocks=1 00:04:39.611 00:04:39.611 ' 00:04:39.611 09:39:18 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:39.611 09:39:18 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56990 00:04:39.611 09:39:18 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56990 00:04:39.611 09:39:18 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 56990 ']' 00:04:39.611 09:39:18 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.611 09:39:18 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:39.611 09:39:18 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.611 09:39:18 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.611 09:39:18 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:39.611 09:39:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.611 [2024-10-30 09:39:18.154986] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
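The `lt 1.15 2` / `cmp_versions` trace that recurs before each test above splits both version strings on `.`, `-`, and `:` and compares them field by field. An illustrative re-implementation of that idea (function name and details are a sketch of the traced logic in scripts/common.sh, not the exact code, and it assumes purely numeric fields):

```shell
#!/usr/bin/env bash
# Return 0 (true) if version $1 is strictly less than version $2.
version_lt() {
  local IFS='.-:' v v1 v2
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  # Walk the longer of the two field lists; missing fields default to 0.
  local len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for ((v = 0; v < len; v++)); do
    local a=${v1[v]:-0} b=${v2[v]:-0}
    if (( a > b )); then return 1; fi
    if (( a < b )); then return 0; fi
  done
  return 1   # equal versions are not "less than"
}
```

This is why the trace shows `1.15 < 2` succeeding: the first fields already differ (1 < 2), so the remaining fields are never consulted.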
00:04:39.611 [2024-10-30 09:39:18.155127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56990 ] 00:04:39.873 [2024-10-30 09:39:18.312666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.873 [2024-10-30 09:39:18.411342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.445 09:39:18 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:40.445 09:39:18 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:40.445 09:39:18 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:40.706 09:39:19 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56990 00:04:40.706 09:39:19 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 56990 ']' 00:04:40.706 09:39:19 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 56990 00:04:40.706 09:39:19 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:40.706 09:39:19 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:40.706 09:39:19 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56990 00:04:40.706 killing process with pid 56990 00:04:40.706 09:39:19 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:40.706 09:39:19 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:40.706 09:39:19 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56990' 00:04:40.706 09:39:19 alias_rpc -- common/autotest_common.sh@971 -- # kill 56990 00:04:40.706 09:39:19 alias_rpc -- common/autotest_common.sh@976 -- # wait 56990 00:04:42.619 ************************************ 00:04:42.619 END TEST alias_rpc 00:04:42.619 ************************************ 00:04:42.619 00:04:42.619 real 
0m2.840s 00:04:42.619 user 0m2.937s 00:04:42.619 sys 0m0.411s 00:04:42.619 09:39:20 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:42.619 09:39:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.619 09:39:20 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:42.619 09:39:20 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:42.619 09:39:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:42.619 09:39:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:42.619 09:39:20 -- common/autotest_common.sh@10 -- # set +x 00:04:42.619 ************************************ 00:04:42.619 START TEST spdkcli_tcp 00:04:42.619 ************************************ 00:04:42.619 09:39:20 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:42.619 * Looking for test storage... 00:04:42.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:42.619 09:39:20 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:42.619 09:39:20 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:42.619 09:39:20 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:42.619 09:39:20 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.619 
09:39:20 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.619 09:39:20 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:42.619 09:39:20 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.619 09:39:20 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:42.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.619 --rc genhtml_branch_coverage=1 00:04:42.619 --rc genhtml_function_coverage=1 00:04:42.619 --rc genhtml_legend=1 
00:04:42.619 --rc geninfo_all_blocks=1
00:04:42.619 --rc geninfo_unexecuted_blocks=1
00:04:42.619
00:04:42.619 '
00:04:42.619 09:39:20 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:04:42.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:42.619 --rc genhtml_branch_coverage=1
00:04:42.619 --rc genhtml_function_coverage=1
00:04:42.619 --rc genhtml_legend=1
00:04:42.619 --rc geninfo_all_blocks=1
00:04:42.619 --rc geninfo_unexecuted_blocks=1
00:04:42.619
00:04:42.619 '
00:04:42.619 09:39:20 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:04:42.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:42.619 --rc genhtml_branch_coverage=1
00:04:42.619 --rc genhtml_function_coverage=1
00:04:42.619 --rc genhtml_legend=1
00:04:42.619 --rc geninfo_all_blocks=1
00:04:42.619 --rc geninfo_unexecuted_blocks=1
00:04:42.619
00:04:42.619 '
00:04:42.619 09:39:20 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:04:42.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:42.619 --rc genhtml_branch_coverage=1
00:04:42.619 --rc genhtml_function_coverage=1
00:04:42.619 --rc genhtml_legend=1
00:04:42.619 --rc geninfo_all_blocks=1
00:04:42.619 --rc geninfo_unexecuted_blocks=1
00:04:42.619
00:04:42.619 '
00:04:42.619 09:39:20 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh
00:04:42.619 09:39:20 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py
00:04:42.619 09:39:20 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py
00:04:42.619 09:39:20 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:04:42.619 09:39:20 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:04:42.619 09:39:20 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:04:42.619 09:39:20 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:04:42.619 09:39:20 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable
00:04:42.619 09:39:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:42.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:42.619 09:39:20 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57086
00:04:42.619 09:39:20 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57086
00:04:42.619 09:39:20 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 57086 ']'
00:04:42.619 09:39:20 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:04:42.619 09:39:20 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:42.619 09:39:20 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100
00:04:42.619 09:39:20 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:42.619 09:39:20 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable
00:04:42.619 09:39:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:42.619 [2024-10-30 09:39:21.069347] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization...
00:04:42.619 [2024-10-30 09:39:21.069626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57086 ] 00:04:42.619 [2024-10-30 09:39:21.230544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.881 [2024-10-30 09:39:21.331874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.881 [2024-10-30 09:39:21.331974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.453 09:39:21 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:43.453 09:39:21 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:43.453 09:39:21 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57102 00:04:43.453 09:39:21 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:43.453 09:39:21 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:43.715 [ 00:04:43.715 "bdev_malloc_delete", 00:04:43.715 "bdev_malloc_create", 00:04:43.715 "bdev_null_resize", 00:04:43.715 "bdev_null_delete", 00:04:43.715 "bdev_null_create", 00:04:43.715 "bdev_nvme_cuse_unregister", 00:04:43.715 "bdev_nvme_cuse_register", 00:04:43.715 "bdev_opal_new_user", 00:04:43.715 "bdev_opal_set_lock_state", 00:04:43.715 "bdev_opal_delete", 00:04:43.715 "bdev_opal_get_info", 00:04:43.715 "bdev_opal_create", 00:04:43.715 "bdev_nvme_opal_revert", 00:04:43.715 "bdev_nvme_opal_init", 00:04:43.715 "bdev_nvme_send_cmd", 00:04:43.715 "bdev_nvme_set_keys", 00:04:43.715 "bdev_nvme_get_path_iostat", 00:04:43.715 "bdev_nvme_get_mdns_discovery_info", 00:04:43.715 "bdev_nvme_stop_mdns_discovery", 00:04:43.715 "bdev_nvme_start_mdns_discovery", 00:04:43.715 "bdev_nvme_set_multipath_policy", 00:04:43.715 
"bdev_nvme_set_preferred_path", 00:04:43.715 "bdev_nvme_get_io_paths", 00:04:43.715 "bdev_nvme_remove_error_injection", 00:04:43.715 "bdev_nvme_add_error_injection", 00:04:43.715 "bdev_nvme_get_discovery_info", 00:04:43.715 "bdev_nvme_stop_discovery", 00:04:43.715 "bdev_nvme_start_discovery", 00:04:43.715 "bdev_nvme_get_controller_health_info", 00:04:43.715 "bdev_nvme_disable_controller", 00:04:43.715 "bdev_nvme_enable_controller", 00:04:43.715 "bdev_nvme_reset_controller", 00:04:43.715 "bdev_nvme_get_transport_statistics", 00:04:43.715 "bdev_nvme_apply_firmware", 00:04:43.715 "bdev_nvme_detach_controller", 00:04:43.715 "bdev_nvme_get_controllers", 00:04:43.715 "bdev_nvme_attach_controller", 00:04:43.715 "bdev_nvme_set_hotplug", 00:04:43.715 "bdev_nvme_set_options", 00:04:43.715 "bdev_passthru_delete", 00:04:43.715 "bdev_passthru_create", 00:04:43.715 "bdev_lvol_set_parent_bdev", 00:04:43.715 "bdev_lvol_set_parent", 00:04:43.715 "bdev_lvol_check_shallow_copy", 00:04:43.715 "bdev_lvol_start_shallow_copy", 00:04:43.715 "bdev_lvol_grow_lvstore", 00:04:43.715 "bdev_lvol_get_lvols", 00:04:43.715 "bdev_lvol_get_lvstores", 00:04:43.715 "bdev_lvol_delete", 00:04:43.715 "bdev_lvol_set_read_only", 00:04:43.715 "bdev_lvol_resize", 00:04:43.715 "bdev_lvol_decouple_parent", 00:04:43.715 "bdev_lvol_inflate", 00:04:43.715 "bdev_lvol_rename", 00:04:43.715 "bdev_lvol_clone_bdev", 00:04:43.715 "bdev_lvol_clone", 00:04:43.715 "bdev_lvol_snapshot", 00:04:43.715 "bdev_lvol_create", 00:04:43.715 "bdev_lvol_delete_lvstore", 00:04:43.715 "bdev_lvol_rename_lvstore", 00:04:43.715 "bdev_lvol_create_lvstore", 00:04:43.715 "bdev_raid_set_options", 00:04:43.715 "bdev_raid_remove_base_bdev", 00:04:43.715 "bdev_raid_add_base_bdev", 00:04:43.715 "bdev_raid_delete", 00:04:43.715 "bdev_raid_create", 00:04:43.715 "bdev_raid_get_bdevs", 00:04:43.715 "bdev_error_inject_error", 00:04:43.715 "bdev_error_delete", 00:04:43.715 "bdev_error_create", 00:04:43.715 "bdev_split_delete", 00:04:43.715 
"bdev_split_create", 00:04:43.715 "bdev_delay_delete", 00:04:43.715 "bdev_delay_create", 00:04:43.715 "bdev_delay_update_latency", 00:04:43.715 "bdev_zone_block_delete", 00:04:43.715 "bdev_zone_block_create", 00:04:43.715 "blobfs_create", 00:04:43.715 "blobfs_detect", 00:04:43.715 "blobfs_set_cache_size", 00:04:43.715 "bdev_aio_delete", 00:04:43.715 "bdev_aio_rescan", 00:04:43.715 "bdev_aio_create", 00:04:43.715 "bdev_ftl_set_property", 00:04:43.715 "bdev_ftl_get_properties", 00:04:43.715 "bdev_ftl_get_stats", 00:04:43.715 "bdev_ftl_unmap", 00:04:43.715 "bdev_ftl_unload", 00:04:43.715 "bdev_ftl_delete", 00:04:43.715 "bdev_ftl_load", 00:04:43.715 "bdev_ftl_create", 00:04:43.715 "bdev_virtio_attach_controller", 00:04:43.715 "bdev_virtio_scsi_get_devices", 00:04:43.715 "bdev_virtio_detach_controller", 00:04:43.715 "bdev_virtio_blk_set_hotplug", 00:04:43.715 "bdev_iscsi_delete", 00:04:43.715 "bdev_iscsi_create", 00:04:43.715 "bdev_iscsi_set_options", 00:04:43.715 "accel_error_inject_error", 00:04:43.715 "ioat_scan_accel_module", 00:04:43.715 "dsa_scan_accel_module", 00:04:43.715 "iaa_scan_accel_module", 00:04:43.715 "keyring_file_remove_key", 00:04:43.715 "keyring_file_add_key", 00:04:43.715 "keyring_linux_set_options", 00:04:43.715 "fsdev_aio_delete", 00:04:43.715 "fsdev_aio_create", 00:04:43.715 "iscsi_get_histogram", 00:04:43.715 "iscsi_enable_histogram", 00:04:43.715 "iscsi_set_options", 00:04:43.715 "iscsi_get_auth_groups", 00:04:43.715 "iscsi_auth_group_remove_secret", 00:04:43.715 "iscsi_auth_group_add_secret", 00:04:43.715 "iscsi_delete_auth_group", 00:04:43.715 "iscsi_create_auth_group", 00:04:43.715 "iscsi_set_discovery_auth", 00:04:43.715 "iscsi_get_options", 00:04:43.715 "iscsi_target_node_request_logout", 00:04:43.715 "iscsi_target_node_set_redirect", 00:04:43.715 "iscsi_target_node_set_auth", 00:04:43.715 "iscsi_target_node_add_lun", 00:04:43.715 "iscsi_get_stats", 00:04:43.715 "iscsi_get_connections", 00:04:43.715 "iscsi_portal_group_set_auth", 
00:04:43.716 "iscsi_start_portal_group", 00:04:43.716 "iscsi_delete_portal_group", 00:04:43.716 "iscsi_create_portal_group", 00:04:43.716 "iscsi_get_portal_groups", 00:04:43.716 "iscsi_delete_target_node", 00:04:43.716 "iscsi_target_node_remove_pg_ig_maps", 00:04:43.716 "iscsi_target_node_add_pg_ig_maps", 00:04:43.716 "iscsi_create_target_node", 00:04:43.716 "iscsi_get_target_nodes", 00:04:43.716 "iscsi_delete_initiator_group", 00:04:43.716 "iscsi_initiator_group_remove_initiators", 00:04:43.716 "iscsi_initiator_group_add_initiators", 00:04:43.716 "iscsi_create_initiator_group", 00:04:43.716 "iscsi_get_initiator_groups", 00:04:43.716 "nvmf_set_crdt", 00:04:43.716 "nvmf_set_config", 00:04:43.716 "nvmf_set_max_subsystems", 00:04:43.716 "nvmf_stop_mdns_prr", 00:04:43.716 "nvmf_publish_mdns_prr", 00:04:43.716 "nvmf_subsystem_get_listeners", 00:04:43.716 "nvmf_subsystem_get_qpairs", 00:04:43.716 "nvmf_subsystem_get_controllers", 00:04:43.716 "nvmf_get_stats", 00:04:43.716 "nvmf_get_transports", 00:04:43.716 "nvmf_create_transport", 00:04:43.716 "nvmf_get_targets", 00:04:43.716 "nvmf_delete_target", 00:04:43.716 "nvmf_create_target", 00:04:43.716 "nvmf_subsystem_allow_any_host", 00:04:43.716 "nvmf_subsystem_set_keys", 00:04:43.716 "nvmf_subsystem_remove_host", 00:04:43.716 "nvmf_subsystem_add_host", 00:04:43.716 "nvmf_ns_remove_host", 00:04:43.716 "nvmf_ns_add_host", 00:04:43.716 "nvmf_subsystem_remove_ns", 00:04:43.716 "nvmf_subsystem_set_ns_ana_group", 00:04:43.716 "nvmf_subsystem_add_ns", 00:04:43.716 "nvmf_subsystem_listener_set_ana_state", 00:04:43.716 "nvmf_discovery_get_referrals", 00:04:43.716 "nvmf_discovery_remove_referral", 00:04:43.716 "nvmf_discovery_add_referral", 00:04:43.716 "nvmf_subsystem_remove_listener", 00:04:43.716 "nvmf_subsystem_add_listener", 00:04:43.716 "nvmf_delete_subsystem", 00:04:43.716 "nvmf_create_subsystem", 00:04:43.716 "nvmf_get_subsystems", 00:04:43.716 "env_dpdk_get_mem_stats", 00:04:43.716 "nbd_get_disks", 00:04:43.716 
"nbd_stop_disk", 00:04:43.716 "nbd_start_disk", 00:04:43.716 "ublk_recover_disk", 00:04:43.716 "ublk_get_disks", 00:04:43.716 "ublk_stop_disk", 00:04:43.716 "ublk_start_disk", 00:04:43.716 "ublk_destroy_target", 00:04:43.716 "ublk_create_target", 00:04:43.716 "virtio_blk_create_transport", 00:04:43.716 "virtio_blk_get_transports", 00:04:43.716 "vhost_controller_set_coalescing", 00:04:43.716 "vhost_get_controllers", 00:04:43.716 "vhost_delete_controller", 00:04:43.716 "vhost_create_blk_controller", 00:04:43.716 "vhost_scsi_controller_remove_target", 00:04:43.716 "vhost_scsi_controller_add_target", 00:04:43.716 "vhost_start_scsi_controller", 00:04:43.716 "vhost_create_scsi_controller", 00:04:43.716 "thread_set_cpumask", 00:04:43.716 "scheduler_set_options", 00:04:43.716 "framework_get_governor", 00:04:43.716 "framework_get_scheduler", 00:04:43.716 "framework_set_scheduler", 00:04:43.716 "framework_get_reactors", 00:04:43.716 "thread_get_io_channels", 00:04:43.716 "thread_get_pollers", 00:04:43.716 "thread_get_stats", 00:04:43.716 "framework_monitor_context_switch", 00:04:43.716 "spdk_kill_instance", 00:04:43.716 "log_enable_timestamps", 00:04:43.716 "log_get_flags", 00:04:43.716 "log_clear_flag", 00:04:43.716 "log_set_flag", 00:04:43.716 "log_get_level", 00:04:43.716 "log_set_level", 00:04:43.716 "log_get_print_level", 00:04:43.716 "log_set_print_level", 00:04:43.716 "framework_enable_cpumask_locks", 00:04:43.716 "framework_disable_cpumask_locks", 00:04:43.716 "framework_wait_init", 00:04:43.716 "framework_start_init", 00:04:43.716 "scsi_get_devices", 00:04:43.716 "bdev_get_histogram", 00:04:43.716 "bdev_enable_histogram", 00:04:43.716 "bdev_set_qos_limit", 00:04:43.716 "bdev_set_qd_sampling_period", 00:04:43.716 "bdev_get_bdevs", 00:04:43.716 "bdev_reset_iostat", 00:04:43.716 "bdev_get_iostat", 00:04:43.716 "bdev_examine", 00:04:43.716 "bdev_wait_for_examine", 00:04:43.716 "bdev_set_options", 00:04:43.716 "accel_get_stats", 00:04:43.716 "accel_set_options", 
00:04:43.716 "accel_set_driver",
00:04:43.716 "accel_crypto_key_destroy",
00:04:43.716 "accel_crypto_keys_get",
00:04:43.716 "accel_crypto_key_create",
00:04:43.716 "accel_assign_opc",
00:04:43.716 "accel_get_module_info",
00:04:43.716 "accel_get_opc_assignments",
00:04:43.716 "vmd_rescan",
00:04:43.716 "vmd_remove_device",
00:04:43.716 "vmd_enable",
00:04:43.716 "sock_get_default_impl",
00:04:43.716 "sock_set_default_impl",
00:04:43.716 "sock_impl_set_options",
00:04:43.716 "sock_impl_get_options",
00:04:43.716 "iobuf_get_stats",
00:04:43.716 "iobuf_set_options",
00:04:43.716 "keyring_get_keys",
00:04:43.716 "framework_get_pci_devices",
00:04:43.716 "framework_get_config",
00:04:43.716 "framework_get_subsystems",
00:04:43.716 "fsdev_set_opts",
00:04:43.716 "fsdev_get_opts",
00:04:43.716 "trace_get_info",
00:04:43.716 "trace_get_tpoint_group_mask",
00:04:43.716 "trace_disable_tpoint_group",
00:04:43.716 "trace_enable_tpoint_group",
00:04:43.716 "trace_clear_tpoint_mask",
00:04:43.716 "trace_set_tpoint_mask",
00:04:43.716 "notify_get_notifications",
00:04:43.716 "notify_get_types",
00:04:43.716 "spdk_get_version",
00:04:43.716 "rpc_get_methods"
00:04:43.716 ]
00:04:43.716 09:39:22 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:04:43.716 09:39:22 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable
00:04:43.716 09:39:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:43.716 09:39:22 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:04:43.716 09:39:22 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57086
00:04:43.716 09:39:22 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 57086 ']'
00:04:43.716 09:39:22 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 57086
00:04:43.716 09:39:22 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname
00:04:43.716 09:39:22 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:04:43.716 09:39:22 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57086 killing process with pid 57086 09:39:22 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:04:43.716 09:39:22 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:04:43.716 09:39:22 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57086'
00:04:43.716 09:39:22 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 57086
00:04:43.716 09:39:22 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 57086
00:04:45.632 ************************************
00:04:45.632 END TEST spdkcli_tcp
00:04:45.632 ************************************
00:04:45.632
00:04:45.632 real 0m2.897s
00:04:45.632 user 0m5.223s
00:04:45.632 sys 0m0.437s
00:04:45.632 09:39:23 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:45.632 09:39:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:45.632 09:39:23 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:04:45.632 09:39:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:45.632 09:39:23 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:45.632 09:39:23 -- common/autotest_common.sh@10 -- # set +x
00:04:45.632 ************************************
00:04:45.632 START TEST dpdk_mem_utility
00:04:45.632 ************************************
00:04:45.632 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:04:45.632 * Looking for test storage...
00:04:45.632 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility
00:04:45.632 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:04:45.632 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version
00:04:45.632 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:04:45.632 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:45.632 09:39:23 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
00:04:45.632 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:45.632 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:04:45.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:45.632 --rc genhtml_branch_coverage=1
00:04:45.632 --rc genhtml_function_coverage=1
00:04:45.632 --rc genhtml_legend=1
00:04:45.632 --rc geninfo_all_blocks=1
00:04:45.632 --rc geninfo_unexecuted_blocks=1
00:04:45.632
00:04:45.632 '
00:04:45.632 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:04:45.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:45.632 --rc genhtml_branch_coverage=1
00:04:45.632 --rc genhtml_function_coverage=1
00:04:45.632 --rc genhtml_legend=1
00:04:45.632 --rc geninfo_all_blocks=1
00:04:45.632 --rc geninfo_unexecuted_blocks=1
00:04:45.632
00:04:45.632 '
00:04:45.632 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:04:45.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:45.632 --rc genhtml_branch_coverage=1
00:04:45.632 --rc genhtml_function_coverage=1
00:04:45.632 --rc genhtml_legend=1
00:04:45.632 --rc geninfo_all_blocks=1
00:04:45.632 --rc geninfo_unexecuted_blocks=1
00:04:45.632
00:04:45.632 '
00:04:45.632 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:04:45.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:45.632 --rc genhtml_branch_coverage=1
00:04:45.632 --rc genhtml_function_coverage=1
00:04:45.632 --rc genhtml_legend=1
00:04:45.632 --rc geninfo_all_blocks=1
00:04:45.632 --rc geninfo_unexecuted_blocks=1
00:04:45.632
00:04:45.632 '
00:04:45.632 09:39:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:04:45.632 09:39:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57191
00:04:45.632 09:39:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57191
00:04:45.632 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 57191 ']'
00:04:45.632 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:45.632 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100
00:04:45.632 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:45.632 09:39:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.632 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:45.632 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:45.632 [2024-10-30 09:39:24.025492] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:04:45.632 [2024-10-30 09:39:24.025747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57191 ] 00:04:45.632 [2024-10-30 09:39:24.177728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.921 [2024-10-30 09:39:24.280437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.497 09:39:24 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:46.497 09:39:24 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:04:46.497 09:39:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:46.497 09:39:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:46.497 09:39:24 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.497 09:39:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:46.497 { 00:04:46.497 "filename": "/tmp/spdk_mem_dump.txt" 00:04:46.497 } 00:04:46.497 09:39:24 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.497 09:39:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:46.497 DPDK memory size 816.000000 MiB in 1 heap(s) 00:04:46.497 1 heaps totaling size 816.000000 MiB 
00:04:46.497 size: 816.000000 MiB heap id: 0
00:04:46.497 end heaps----------
00:04:46.497 9 mempools totaling size 595.772034 MiB
00:04:46.497 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:04:46.497 size: 158.602051 MiB name: PDU_data_out_Pool
00:04:46.497 size: 92.545471 MiB name: bdev_io_57191
00:04:46.497 size: 50.003479 MiB name: msgpool_57191
00:04:46.497 size: 36.509338 MiB name: fsdev_io_57191
00:04:46.497 size: 21.763794 MiB name: PDU_Pool
00:04:46.497 size: 19.513306 MiB name: SCSI_TASK_Pool
00:04:46.497 size: 4.133484 MiB name: evtpool_57191
00:04:46.497 size: 0.026123 MiB name: Session_Pool
00:04:46.497 end mempools-------
00:04:46.497 6 memzones totaling size 4.142822 MiB
00:04:46.497 size: 1.000366 MiB name: RG_ring_0_57191
00:04:46.497 size: 1.000366 MiB name: RG_ring_1_57191
00:04:46.497 size: 1.000366 MiB name: RG_ring_4_57191
00:04:46.497 size: 1.000366 MiB name: RG_ring_5_57191
00:04:46.497 size: 0.125366 MiB name: RG_ring_2_57191
00:04:46.497 size: 0.015991 MiB name: RG_ring_3_57191
00:04:46.497 end memzones-------
00:04:46.497 09:39:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:04:46.497 heap id: 0 total size: 816.000000 MiB number of busy elements: 327 number of free elements: 18
00:04:46.497 list of free elements. size: 16.788452 MiB
00:04:46.497 element at address: 0x200006400000 with size: 1.995972 MiB
00:04:46.497 element at address: 0x20000a600000 with size: 1.995972 MiB
00:04:46.497 element at address: 0x200003e00000 with size: 1.991028 MiB
00:04:46.497 element at address: 0x200018d00040 with size: 0.999939 MiB
00:04:46.497 element at address: 0x200019100040 with size: 0.999939 MiB
00:04:46.497 element at address: 0x200019200000 with size: 0.999084 MiB
00:04:46.497 element at address: 0x200031e00000 with size: 0.994324 MiB
00:04:46.497 element at address: 0x200000400000 with size: 0.992004 MiB
00:04:46.497 element at address: 0x200018a00000 with size: 0.959656 MiB
00:04:46.497 element at address: 0x200019500040 with size: 0.936401 MiB
00:04:46.497 element at address: 0x200000200000 with size: 0.716980 MiB
00:04:46.498 element at address: 0x20001ac00000 with size: 0.559021 MiB
00:04:46.498 element at address: 0x200000c00000 with size: 0.490173 MiB
00:04:46.498 element at address: 0x200018e00000 with size: 0.487976 MiB
00:04:46.498 element at address: 0x200019600000 with size: 0.485413 MiB
00:04:46.498 element at address: 0x200012c00000 with size: 0.443237 MiB
00:04:46.498 element at address: 0x200028000000 with size: 0.390442 MiB
00:04:46.498 element at address: 0x200000800000 with size: 0.350891 MiB
00:04:46.498 list of standard malloc elements. size: 199.290649 MiB
00:04:46.498 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:04:46.498 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:04:46.498 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:04:46.498 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:04:46.498 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:04:46.498 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:04:46.498 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:04:46.498 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:04:46.498 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:04:46.498 element at address: 0x2000195efdc0 with size: 0.000366 MiB
00:04:46.498 element at address: 0x200012bff040 with size: 0.000305 MiB
00:04:46.498 element at address: 0x2000002d7b00 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000003d9d80 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004fdf40 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004fe040 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004fe140 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004fe240 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004fe340 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004fe440 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004fe540 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004fe640 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004fe740 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004fe840 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004fe940 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004fea40 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004feb40 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004fec40 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004fed40 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004fee40 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004fef40 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004ff040 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004ff140 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004ff240 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004ff340 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004ff440 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004ff540 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004ff640 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004ff740 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004ff840 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004ff940 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004ffbc0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004ffcc0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000004ffdc0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000087e1c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000087e2c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000087e3c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000087e4c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000087e5c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000087e6c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000087e7c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000087e8c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000087e9c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000087eac0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000087ebc0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000087ecc0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000087edc0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000087eec0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000087efc0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000087f0c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000087f1c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000087f2c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000087f3c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000087f4c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000008ff800 with size: 0.000244 MiB
00:04:46.498 element at address: 0x2000008ffa80 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200000c7d7c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200000c7d8c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200000c7d9c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200000c7dac0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200000c7dbc0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200000c7dcc0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200000c7ddc0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200000c7dec0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200000c7dfc0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200000c7e0c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200000c7e1c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200000c7e2c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200000c7e3c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200000c7e4c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200000c7e5c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200000c7e6c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200000c7e7c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200000c7e8c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200000c7e9c0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200000c7eac0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200000c7ebc0 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200000cfef00 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200000cff000 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000a5ff200 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000a5ff300 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000a5ff400 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000a5ff500 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000a5ff600 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000a5ff700 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000a5ff800 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000a5ff900 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000a5ffa00 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000a5ffb00 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000a5ffc00 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000a5ffd00 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000a5ffe00 with size: 0.000244 MiB
00:04:46.498 element at address: 0x20000a5fff00 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200012bff180 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200012bff280 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200012bff380 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200012bff480 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200012bff580 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200012bff680 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200012bff780 with size: 0.000244 MiB
00:04:46.498 element at address: 0x200012bff880 with size: 0.000244 MiB
00:04:46.498 element at address:
0x200012bff980 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200012c71780 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200012c71880 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200012c71980 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200012c72080 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200012c72180 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:04:46.498 
element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:04:46.498 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:46.498 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:04:46.498 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:04:46.498 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:04:46.498 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:04:46.498 element at address: 0x20001ac8f1c0 with size: 0.000244 MiB 00:04:46.498 element at address: 0x20001ac8f2c0 with size: 0.000244 MiB 00:04:46.498 element at address: 0x20001ac8f3c0 with size: 0.000244 MiB 00:04:46.498 element at address: 0x20001ac8f4c0 with size: 0.000244 MiB 00:04:46.498 element at address: 0x20001ac8f5c0 with size: 0.000244 MiB 00:04:46.498 element at address: 0x20001ac8f6c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac905c0 with size: 0.000244 
MiB 00:04:46.499 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac921c0 
with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:04:46.499 element at 
address: 0x20001ac93dc0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:04:46.499 element at address: 0x200028063f40 with size: 0.000244 MiB 00:04:46.499 element at address: 0x200028064040 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806af80 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806b080 with size: 0.000244 MiB 
00:04:46.499 element at address: 0x20002806b180 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806b280 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806b380 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806b480 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806b580 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806b680 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806b780 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806b880 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806b980 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806be80 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806c080 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806c180 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806c280 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806c380 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806c480 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806c580 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806c680 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806c780 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806c880 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806c980 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806cc80 with 
size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806d080 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806d180 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806d280 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806d380 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806d480 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806d580 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806d680 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806d780 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806d880 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806d980 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806da80 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806db80 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806de80 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806df80 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806e080 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806e180 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806e280 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806e380 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806e480 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806e580 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806e680 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806e780 with size: 0.000244 MiB 00:04:46.499 element at address: 
0x20002806e880 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806e980 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:04:46.499 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:04:46.500 element at address: 0x20002806f080 with size: 0.000244 MiB 00:04:46.500 element at address: 0x20002806f180 with size: 0.000244 MiB 00:04:46.500 element at address: 0x20002806f280 with size: 0.000244 MiB 00:04:46.500 element at address: 0x20002806f380 with size: 0.000244 MiB 00:04:46.500 element at address: 0x20002806f480 with size: 0.000244 MiB 00:04:46.500 element at address: 0x20002806f580 with size: 0.000244 MiB 00:04:46.500 element at address: 0x20002806f680 with size: 0.000244 MiB 00:04:46.500 element at address: 0x20002806f780 with size: 0.000244 MiB 00:04:46.500 element at address: 0x20002806f880 with size: 0.000244 MiB 00:04:46.500 element at address: 0x20002806f980 with size: 0.000244 MiB 00:04:46.500 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:04:46.500 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:04:46.500 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:04:46.500 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:04:46.500 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:04:46.500 list of memzone associated elements. 
size: 599.920898 MiB 00:04:46.500 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:04:46.500 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:46.500 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:04:46.500 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:46.500 element at address: 0x200012df4740 with size: 92.045105 MiB 00:04:46.500 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57191_0 00:04:46.500 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:46.500 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57191_0 00:04:46.500 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:46.500 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57191_0 00:04:46.500 element at address: 0x2000197be900 with size: 20.255615 MiB 00:04:46.500 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:46.500 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:04:46.500 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:46.500 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:46.500 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57191_0 00:04:46.500 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:46.500 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57191 00:04:46.500 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:46.500 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57191 00:04:46.500 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:46.500 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:46.500 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:04:46.500 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:46.500 element at address: 0x200018afde00 with size: 1.008179 MiB 00:04:46.500 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:46.500 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:04:46.500 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:46.500 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:46.500 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57191 00:04:46.500 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:46.500 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57191 00:04:46.500 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:04:46.500 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57191 00:04:46.500 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:04:46.500 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57191 00:04:46.500 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:46.500 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57191 00:04:46.500 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:46.500 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57191 00:04:46.500 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:04:46.500 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:46.500 element at address: 0x200012c72280 with size: 0.500549 MiB 00:04:46.500 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:46.500 element at address: 0x20001967c440 with size: 0.250549 MiB 00:04:46.500 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:46.500 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:46.500 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57191 00:04:46.500 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:46.500 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57191 00:04:46.500 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:04:46.500 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:46.500 element at address: 0x200028064140 with size: 0.023804 MiB 00:04:46.500 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:46.500 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:46.500 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57191 00:04:46.500 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:04:46.500 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:46.500 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:46.500 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57191 00:04:46.500 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:46.500 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57191 00:04:46.500 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:46.500 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57191 00:04:46.500 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:04:46.500 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:46.500 09:39:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:46.500 09:39:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57191 00:04:46.500 09:39:24 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 57191 ']' 00:04:46.500 09:39:24 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 57191 00:04:46.500 09:39:24 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:04:46.500 09:39:25 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:46.500 09:39:25 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57191 00:04:46.500 09:39:25 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:46.500 killing process with pid 57191 
00:04:46.500 09:39:25 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:46.500 09:39:25 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57191' 00:04:46.500 09:39:25 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 57191 00:04:46.500 09:39:25 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 57191 00:04:48.411 ************************************ 00:04:48.411 END TEST dpdk_mem_utility 00:04:48.411 ************************************ 00:04:48.411 00:04:48.411 real 0m2.719s 00:04:48.411 user 0m2.700s 00:04:48.411 sys 0m0.433s 00:04:48.411 09:39:26 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:48.411 09:39:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:48.411 09:39:26 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:48.411 09:39:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:48.411 09:39:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:48.411 09:39:26 -- common/autotest_common.sh@10 -- # set +x 00:04:48.411 ************************************ 00:04:48.411 START TEST event 00:04:48.411 ************************************ 00:04:48.411 09:39:26 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:48.411 * Looking for test storage... 
00:04:48.411 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:48.411 09:39:26 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:48.411 09:39:26 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:48.411 09:39:26 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:48.411 09:39:26 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:48.411 09:39:26 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.411 09:39:26 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.411 09:39:26 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.411 09:39:26 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.411 09:39:26 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.411 09:39:26 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.411 09:39:26 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.411 09:39:26 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.411 09:39:26 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.411 09:39:26 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.411 09:39:26 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.411 09:39:26 event -- scripts/common.sh@344 -- # case "$op" in 00:04:48.411 09:39:26 event -- scripts/common.sh@345 -- # : 1 00:04:48.411 09:39:26 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.411 09:39:26 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:48.411 09:39:26 event -- scripts/common.sh@365 -- # decimal 1 00:04:48.411 09:39:26 event -- scripts/common.sh@353 -- # local d=1 00:04:48.411 09:39:26 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.411 09:39:26 event -- scripts/common.sh@355 -- # echo 1 00:04:48.411 09:39:26 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.411 09:39:26 event -- scripts/common.sh@366 -- # decimal 2 00:04:48.411 09:39:26 event -- scripts/common.sh@353 -- # local d=2 00:04:48.411 09:39:26 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.411 09:39:26 event -- scripts/common.sh@355 -- # echo 2 00:04:48.411 09:39:26 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.411 09:39:26 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.411 09:39:26 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.411 09:39:26 event -- scripts/common.sh@368 -- # return 0 00:04:48.411 09:39:26 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.411 09:39:26 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:48.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.411 --rc genhtml_branch_coverage=1 00:04:48.411 --rc genhtml_function_coverage=1 00:04:48.411 --rc genhtml_legend=1 00:04:48.411 --rc geninfo_all_blocks=1 00:04:48.411 --rc geninfo_unexecuted_blocks=1 00:04:48.411 00:04:48.411 ' 00:04:48.411 09:39:26 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:48.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.411 --rc genhtml_branch_coverage=1 00:04:48.411 --rc genhtml_function_coverage=1 00:04:48.411 --rc genhtml_legend=1 00:04:48.411 --rc geninfo_all_blocks=1 00:04:48.411 --rc geninfo_unexecuted_blocks=1 00:04:48.411 00:04:48.411 ' 00:04:48.411 09:39:26 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:48.411 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:48.411 --rc genhtml_branch_coverage=1 00:04:48.411 --rc genhtml_function_coverage=1 00:04:48.411 --rc genhtml_legend=1 00:04:48.411 --rc geninfo_all_blocks=1 00:04:48.411 --rc geninfo_unexecuted_blocks=1 00:04:48.411 00:04:48.411 ' 00:04:48.411 09:39:26 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:48.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.411 --rc genhtml_branch_coverage=1 00:04:48.411 --rc genhtml_function_coverage=1 00:04:48.411 --rc genhtml_legend=1 00:04:48.411 --rc geninfo_all_blocks=1 00:04:48.411 --rc geninfo_unexecuted_blocks=1 00:04:48.411 00:04:48.411 ' 00:04:48.411 09:39:26 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:48.411 09:39:26 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:48.411 09:39:26 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:48.411 09:39:26 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:04:48.411 09:39:26 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:48.411 09:39:26 event -- common/autotest_common.sh@10 -- # set +x 00:04:48.411 ************************************ 00:04:48.411 START TEST event_perf 00:04:48.411 ************************************ 00:04:48.411 09:39:26 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:48.411 Running I/O for 1 seconds...[2024-10-30 09:39:26.783396] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:04:48.411 [2024-10-30 09:39:26.783576] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57283 ] 00:04:48.411 [2024-10-30 09:39:26.943225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:48.674 [2024-10-30 09:39:27.047983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.674 [2024-10-30 09:39:27.048310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.674 Running I/O for 1 seconds...[2024-10-30 09:39:27.048804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:48.674 [2024-10-30 09:39:27.048821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.619 00:04:49.619 lcore 0: 194374 00:04:49.619 lcore 1: 194375 00:04:49.619 lcore 2: 194372 00:04:49.619 lcore 3: 194372 00:04:49.619 done. 
00:04:49.619 00:04:49.619 real 0m1.472s 00:04:49.619 user 0m4.263s 00:04:49.619 sys 0m0.085s 00:04:49.619 09:39:28 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:49.619 ************************************ 00:04:49.619 09:39:28 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:49.619 END TEST event_perf 00:04:49.619 ************************************ 00:04:49.882 09:39:28 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:49.882 09:39:28 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:49.882 09:39:28 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:49.882 09:39:28 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.882 ************************************ 00:04:49.882 START TEST event_reactor 00:04:49.882 ************************************ 00:04:49.882 09:39:28 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:49.882 [2024-10-30 09:39:28.319509] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:04:49.882 [2024-10-30 09:39:28.319750] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57328 ] 00:04:49.882 [2024-10-30 09:39:28.478561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.143 [2024-10-30 09:39:28.584281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.528 test_start 00:04:51.528 oneshot 00:04:51.528 tick 100 00:04:51.528 tick 100 00:04:51.528 tick 250 00:04:51.528 tick 100 00:04:51.528 tick 100 00:04:51.528 tick 250 00:04:51.528 tick 100 00:04:51.528 tick 500 00:04:51.528 tick 100 00:04:51.528 tick 100 00:04:51.528 tick 250 00:04:51.528 tick 100 00:04:51.528 tick 100 00:04:51.528 test_end 00:04:51.528 00:04:51.528 real 0m1.452s 00:04:51.528 user 0m1.279s 00:04:51.528 sys 0m0.063s 00:04:51.528 09:39:29 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:51.528 09:39:29 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:51.528 ************************************ 00:04:51.528 END TEST event_reactor 00:04:51.528 ************************************ 00:04:51.528 09:39:29 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:51.528 09:39:29 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:51.528 09:39:29 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:51.528 09:39:29 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.528 ************************************ 00:04:51.528 START TEST event_reactor_perf 00:04:51.528 ************************************ 00:04:51.528 09:39:29 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:51.528 [2024-10-30 
09:39:29.840258] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:04:51.528 [2024-10-30 09:39:29.840378] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57359 ] 00:04:51.528 [2024-10-30 09:39:30.002274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.528 [2024-10-30 09:39:30.103557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.910 test_start 00:04:52.910 test_end 00:04:52.910 Performance: 317311 events per second 00:04:52.910 ************************************ 00:04:52.910 END TEST event_reactor_perf 00:04:52.910 ************************************ 00:04:52.910 00:04:52.910 real 0m1.444s 00:04:52.910 user 0m1.274s 00:04:52.910 sys 0m0.062s 00:04:52.910 09:39:31 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:52.910 09:39:31 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:52.910 09:39:31 event -- event/event.sh@49 -- # uname -s 00:04:52.910 09:39:31 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:52.910 09:39:31 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:52.910 09:39:31 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:52.910 09:39:31 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:52.910 09:39:31 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.910 ************************************ 00:04:52.910 START TEST event_scheduler 00:04:52.910 ************************************ 00:04:52.910 09:39:31 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:52.910 * Looking for test storage... 
00:04:52.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:52.910 09:39:31 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:52.910 09:39:31 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:52.910 09:39:31 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:52.910 09:39:31 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:52.910 09:39:31 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.911 09:39:31 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:52.911 09:39:31 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.911 09:39:31 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:52.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.911 --rc genhtml_branch_coverage=1 00:04:52.911 --rc genhtml_function_coverage=1 00:04:52.911 --rc genhtml_legend=1 00:04:52.911 --rc geninfo_all_blocks=1 00:04:52.911 --rc geninfo_unexecuted_blocks=1 00:04:52.911 00:04:52.911 ' 00:04:52.911 09:39:31 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:52.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.911 --rc genhtml_branch_coverage=1 00:04:52.911 --rc genhtml_function_coverage=1 00:04:52.911 --rc 
genhtml_legend=1 00:04:52.911 --rc geninfo_all_blocks=1 00:04:52.911 --rc geninfo_unexecuted_blocks=1 00:04:52.911 00:04:52.911 ' 00:04:52.911 09:39:31 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:52.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.911 --rc genhtml_branch_coverage=1 00:04:52.911 --rc genhtml_function_coverage=1 00:04:52.911 --rc genhtml_legend=1 00:04:52.911 --rc geninfo_all_blocks=1 00:04:52.911 --rc geninfo_unexecuted_blocks=1 00:04:52.911 00:04:52.911 ' 00:04:52.911 09:39:31 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:52.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.911 --rc genhtml_branch_coverage=1 00:04:52.911 --rc genhtml_function_coverage=1 00:04:52.911 --rc genhtml_legend=1 00:04:52.911 --rc geninfo_all_blocks=1 00:04:52.911 --rc geninfo_unexecuted_blocks=1 00:04:52.911 00:04:52.911 ' 00:04:52.911 09:39:31 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:52.911 09:39:31 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=57435 00:04:52.911 09:39:31 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.911 09:39:31 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:52.911 09:39:31 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 57435 00:04:52.911 09:39:31 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 57435 ']' 00:04:52.911 09:39:31 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.911 09:39:31 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:52.911 09:39:31 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:04:52.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.911 09:39:31 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:52.911 09:39:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:53.171 [2024-10-30 09:39:31.532023] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:04:53.171 [2024-10-30 09:39:31.532322] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57435 ] 00:04:53.171 [2024-10-30 09:39:31.696111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:53.430 [2024-10-30 09:39:31.805860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.430 [2024-10-30 09:39:31.806357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.430 [2024-10-30 09:39:31.806988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:53.430 [2024-10-30 09:39:31.807018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:54.002 09:39:32 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:54.002 09:39:32 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:04:54.002 09:39:32 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:54.002 09:39:32 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.002 09:39:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.002 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:54.002 POWER: Cannot set governor of lcore 0 to userspace 00:04:54.002 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:54.002 POWER: Cannot set governor of lcore 0 to performance 00:04:54.002 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:54.002 POWER: Cannot set governor of lcore 0 to userspace 00:04:54.002 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:54.002 POWER: Cannot set governor of lcore 0 to userspace 00:04:54.002 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:54.002 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:54.003 POWER: Unable to set Power Management Environment for lcore 0 00:04:54.003 [2024-10-30 09:39:32.381373] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:54.003 [2024-10-30 09:39:32.381451] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:54.003 [2024-10-30 09:39:32.381464] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:54.003 [2024-10-30 09:39:32.381484] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:54.003 [2024-10-30 09:39:32.381492] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:54.003 [2024-10-30 09:39:32.381501] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:54.003 09:39:32 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.003 09:39:32 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:54.003 09:39:32 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.003 09:39:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.265 [2024-10-30 09:39:32.629300] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:54.265 09:39:32 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.265 09:39:32 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:54.265 09:39:32 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:54.265 09:39:32 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:54.265 09:39:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.265 ************************************ 00:04:54.265 START TEST scheduler_create_thread 00:04:54.265 ************************************ 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.265 2 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.265 3 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.265 4 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.265 5 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.265 6 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:04:54.265 7 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.265 8 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.265 9 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.265 10 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.265 09:39:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.651 09:39:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.651 09:39:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:55.651 09:39:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:55.651 09:39:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.651 09:39:34 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.036 ************************************ 00:04:57.036 END TEST scheduler_create_thread 00:04:57.036 ************************************ 00:04:57.036 09:39:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.036 00:04:57.036 real 0m2.616s 00:04:57.036 user 0m0.014s 00:04:57.036 sys 0m0.007s 00:04:57.036 09:39:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:57.036 09:39:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.036 09:39:35 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:57.036 09:39:35 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 57435 00:04:57.036 09:39:35 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 57435 ']' 00:04:57.036 09:39:35 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 57435 00:04:57.036 09:39:35 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:04:57.036 09:39:35 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:57.036 09:39:35 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57435 00:04:57.036 killing process with pid 57435 00:04:57.036 09:39:35 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:57.036 09:39:35 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:57.036 09:39:35 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57435' 00:04:57.036 09:39:35 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 57435 00:04:57.036 09:39:35 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 57435 00:04:57.298 [2024-10-30 09:39:35.740666] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:58.241 ************************************ 00:04:58.241 END TEST event_scheduler 00:04:58.241 ************************************ 00:04:58.241 00:04:58.241 real 0m5.161s 00:04:58.241 user 0m8.986s 00:04:58.241 sys 0m0.363s 00:04:58.241 09:39:36 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:58.241 09:39:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:58.241 09:39:36 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:58.241 09:39:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:58.241 09:39:36 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:58.241 09:39:36 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:58.241 09:39:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.241 ************************************ 00:04:58.241 START TEST app_repeat 00:04:58.241 ************************************ 00:04:58.241 09:39:36 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:04:58.241 09:39:36 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.241 09:39:36 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.241 09:39:36 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:58.241 09:39:36 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.241 09:39:36 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:58.241 09:39:36 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:58.241 09:39:36 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:58.241 09:39:36 event.app_repeat -- event/event.sh@19 -- # repeat_pid=57541 00:04:58.241 09:39:36 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.241 09:39:36 event.app_repeat -- 
event/event.sh@21 -- # echo 'Process app_repeat pid: 57541' 00:04:58.241 Process app_repeat pid: 57541 00:04:58.241 spdk_app_start Round 0 00:04:58.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:58.241 09:39:36 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:58.241 09:39:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:58.241 09:39:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:58.241 09:39:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57541 /var/tmp/spdk-nbd.sock 00:04:58.241 09:39:36 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 57541 ']' 00:04:58.241 09:39:36 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:58.241 09:39:36 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:58.241 09:39:36 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:58.241 09:39:36 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:58.241 09:39:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:58.241 [2024-10-30 09:39:36.598854] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:04:58.241 [2024-10-30 09:39:36.598970] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57541 ] 00:04:58.241 [2024-10-30 09:39:36.758571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:58.502 [2024-10-30 09:39:36.862407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.502 [2024-10-30 09:39:36.862520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.130 09:39:37 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:59.130 09:39:37 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:59.130 09:39:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.130 Malloc0 00:04:59.130 09:39:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.389 Malloc1 00:04:59.651 09:39:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.651 09:39:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.651 09:39:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.651 09:39:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:59.651 09:39:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.651 09:39:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:59.651 09:39:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.651 09:39:38 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.651 09:39:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.651 09:39:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:59.651 09:39:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.651 09:39:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:59.651 09:39:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:59.651 09:39:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:59.651 09:39:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.651 09:39:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:59.651 /dev/nbd0 00:04:59.651 09:39:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:59.651 09:39:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:59.651 09:39:38 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:59.651 09:39:38 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:59.651 09:39:38 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:59.651 09:39:38 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:59.651 09:39:38 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:59.651 09:39:38 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:59.651 09:39:38 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:59.651 09:39:38 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:59.651 09:39:38 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.651 1+0 records in 00:04:59.651 1+0 
records out 00:04:59.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454377 s, 9.0 MB/s 00:04:59.652 09:39:38 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.652 09:39:38 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:59.652 09:39:38 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.652 09:39:38 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:59.652 09:39:38 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:59.652 09:39:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.652 09:39:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.652 09:39:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:59.913 /dev/nbd1 00:04:59.913 09:39:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:59.913 09:39:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:59.913 09:39:38 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:59.913 09:39:38 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:59.913 09:39:38 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:59.913 09:39:38 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:59.913 09:39:38 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:59.913 09:39:38 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:59.913 09:39:38 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:59.913 09:39:38 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:59.913 09:39:38 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.913 1+0 records in 00:04:59.913 1+0 records out 00:04:59.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021719 s, 18.9 MB/s 00:04:59.913 09:39:38 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.913 09:39:38 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:59.913 09:39:38 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.913 09:39:38 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:59.913 09:39:38 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:59.913 09:39:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.913 09:39:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.913 09:39:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.913 09:39:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.913 09:39:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:00.485 { 00:05:00.485 "nbd_device": "/dev/nbd0", 00:05:00.485 "bdev_name": "Malloc0" 00:05:00.485 }, 00:05:00.485 { 00:05:00.485 "nbd_device": "/dev/nbd1", 00:05:00.485 "bdev_name": "Malloc1" 00:05:00.485 } 00:05:00.485 ]' 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:00.485 { 00:05:00.485 "nbd_device": "/dev/nbd0", 00:05:00.485 "bdev_name": "Malloc0" 00:05:00.485 }, 00:05:00.485 { 00:05:00.485 "nbd_device": "/dev/nbd1", 00:05:00.485 "bdev_name": "Malloc1" 00:05:00.485 } 00:05:00.485 ]' 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:00.485 /dev/nbd1' 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:00.485 /dev/nbd1' 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:00.485 256+0 records in 00:05:00.485 256+0 records out 00:05:00.485 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00755356 s, 139 MB/s 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:00.485 256+0 records in 00:05:00.485 256+0 records out 00:05:00.485 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196826 s, 53.3 MB/s 00:05:00.485 09:39:38 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:00.485 256+0 records in 00:05:00.485 256+0 records out 00:05:00.485 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212799 s, 49.3 MB/s 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.485 09:39:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:00.746 09:39:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:00.746 09:39:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:00.746 09:39:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:00.746 09:39:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.746 09:39:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.746 09:39:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:00.746 09:39:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:00.746 09:39:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.746 09:39:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.746 09:39:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:01.008 09:39:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:01.008 09:39:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:01.008 09:39:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:01.008 09:39:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.008 09:39:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.008 09:39:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:01.008 09:39:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:01.008 09:39:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.008 09:39:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.008 09:39:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.008 09:39:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.008 09:39:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:01.008 09:39:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:01.008 09:39:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.269 09:39:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:01.269 09:39:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:01.269 09:39:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.269 09:39:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:01.269 09:39:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:01.269 09:39:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:01.269 09:39:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:01.269 09:39:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:01.269 09:39:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:01.269 09:39:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:01.531 09:39:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:02.197 [2024-10-30 09:39:40.727785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.458 [2024-10-30 09:39:40.826137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.458 [2024-10-30 09:39:40.826173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.458 
[2024-10-30 09:39:40.950961] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:02.458 [2024-10-30 09:39:40.951028] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:05.002 spdk_app_start Round 1 00:05:05.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:05.002 09:39:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:05.002 09:39:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:05.002 09:39:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57541 /var/tmp/spdk-nbd.sock 00:05:05.002 09:39:43 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 57541 ']' 00:05:05.002 09:39:43 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.002 09:39:43 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:05.002 09:39:43 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:05.002 09:39:43 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:05.002 09:39:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.002 09:39:43 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:05.002 09:39:43 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:05.002 09:39:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.002 Malloc0 00:05:05.002 09:39:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.262 Malloc1 00:05:05.262 09:39:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.262 09:39:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.262 09:39:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.262 09:39:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:05.262 09:39:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.262 09:39:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:05.262 09:39:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.262 09:39:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.262 09:39:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.262 09:39:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:05.262 09:39:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.262 09:39:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:05.262 09:39:43 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:05.262 09:39:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:05.262 09:39:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.262 09:39:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:05.521 /dev/nbd0 00:05:05.521 09:39:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:05.521 09:39:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:05.521 09:39:43 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:05.521 09:39:43 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:05.521 09:39:43 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:05.521 09:39:43 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:05.521 09:39:43 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:05.521 09:39:43 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:05.521 09:39:43 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:05.521 09:39:43 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:05.521 09:39:43 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.521 1+0 records in 00:05:05.521 1+0 records out 00:05:05.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523062 s, 7.8 MB/s 00:05:05.521 09:39:43 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.521 09:39:43 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:05.521 09:39:43 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.521 09:39:43 
event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:05.521 09:39:43 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:05.521 09:39:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.521 09:39:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.521 09:39:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:05.779 /dev/nbd1 00:05:05.779 09:39:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:05.780 09:39:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:05.780 09:39:44 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:05.780 09:39:44 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:05.780 09:39:44 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:05.780 09:39:44 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:05.780 09:39:44 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:05.780 09:39:44 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:05.780 09:39:44 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:05.780 09:39:44 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:05.780 09:39:44 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.780 1+0 records in 00:05:05.780 1+0 records out 00:05:05.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519153 s, 7.9 MB/s 00:05:05.780 09:39:44 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.780 09:39:44 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:05.780 09:39:44 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.780 09:39:44 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:05.780 09:39:44 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:05.780 09:39:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.780 09:39:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.780 09:39:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.780 09:39:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.780 09:39:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.041 09:39:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:06.041 { 00:05:06.041 "nbd_device": "/dev/nbd0", 00:05:06.041 "bdev_name": "Malloc0" 00:05:06.041 }, 00:05:06.041 { 00:05:06.041 "nbd_device": "/dev/nbd1", 00:05:06.041 "bdev_name": "Malloc1" 00:05:06.041 } 00:05:06.041 ]' 00:05:06.041 09:39:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.041 { 00:05:06.041 "nbd_device": "/dev/nbd0", 00:05:06.041 "bdev_name": "Malloc0" 00:05:06.041 }, 00:05:06.041 { 00:05:06.041 "nbd_device": "/dev/nbd1", 00:05:06.041 "bdev_name": "Malloc1" 00:05:06.041 } 00:05:06.041 ]' 00:05:06.041 09:39:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.041 09:39:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.041 /dev/nbd1' 00:05:06.041 09:39:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.041 /dev/nbd1' 00:05:06.041 09:39:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.041 09:39:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.041 09:39:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.041 
09:39:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.041 09:39:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.041 09:39:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.041 09:39:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.041 09:39:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.041 09:39:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.041 09:39:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.041 09:39:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.041 09:39:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.041 256+0 records in 00:05:06.041 256+0 records out 00:05:06.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00618837 s, 169 MB/s 00:05:06.041 09:39:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.041 09:39:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.041 256+0 records in 00:05:06.041 256+0 records out 00:05:06.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0183153 s, 57.3 MB/s 00:05:06.041 09:39:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.041 09:39:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.041 256+0 records in 00:05:06.041 256+0 records out 00:05:06.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215023 s, 48.8 MB/s 00:05:06.041 09:39:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:06.041 09:39:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.041 09:39:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.042 09:39:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.042 09:39:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.042 09:39:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.042 09:39:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.042 09:39:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.042 09:39:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.042 09:39:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.042 09:39:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.042 09:39:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.042 09:39:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.042 09:39:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.042 09:39:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.042 09:39:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.042 09:39:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:06.042 09:39:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.042 09:39:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:06.308 09:39:44 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:06.308 09:39:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:06.308 09:39:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:06.308 09:39:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.308 09:39:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.308 09:39:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:06.308 09:39:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.308 09:39:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.308 09:39:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.308 09:39:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:06.570 09:39:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:06.570 09:39:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:06.570 09:39:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:06.570 09:39:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.571 09:39:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.571 09:39:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:06.571 09:39:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.571 09:39:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.571 09:39:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.571 09:39:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.571 09:39:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.571 09:39:45 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:06.571 09:39:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:06.571 09:39:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.571 09:39:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:06.571 09:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:06.571 09:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.571 09:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:06.571 09:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:06.571 09:39:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:06.571 09:39:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:06.571 09:39:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:06.571 09:39:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:06.571 09:39:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.140 09:39:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:07.710 [2024-10-30 09:39:46.234546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.971 [2024-10-30 09:39:46.335163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.971 [2024-10-30 09:39:46.335217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.971 [2024-10-30 09:39:46.465511] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:07.971 [2024-10-30 09:39:46.465558] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:09.993 spdk_app_start Round 2 00:05:09.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:09.993 09:39:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:09.993 09:39:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:09.993 09:39:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57541 /var/tmp/spdk-nbd.sock 00:05:09.993 09:39:48 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 57541 ']' 00:05:09.993 09:39:48 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.993 09:39:48 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:09.993 09:39:48 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:09.993 09:39:48 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:09.993 09:39:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.253 09:39:48 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:10.253 09:39:48 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:10.253 09:39:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.514 Malloc0 00:05:10.514 09:39:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.775 Malloc1 00:05:10.775 09:39:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.775 09:39:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.775 09:39:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.775 09:39:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:10.775 09:39:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.775 09:39:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:10.775 09:39:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.775 09:39:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.775 09:39:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.775 09:39:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:10.775 09:39:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.775 09:39:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:10.775 09:39:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:10.775 09:39:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:10.775 09:39:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.775 09:39:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:11.036 /dev/nbd0 00:05:11.036 09:39:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:11.036 09:39:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:11.036 09:39:49 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:11.036 09:39:49 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:11.036 09:39:49 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:11.036 09:39:49 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:11.036 09:39:49 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:11.036 09:39:49 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:11.036 09:39:49 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 
00:05:11.036 09:39:49 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:11.036 09:39:49 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.036 1+0 records in 00:05:11.036 1+0 records out 00:05:11.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379161 s, 10.8 MB/s 00:05:11.036 09:39:49 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.036 09:39:49 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:11.037 09:39:49 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.037 09:39:49 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:11.037 09:39:49 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:11.037 09:39:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.037 09:39:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.037 09:39:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:11.295 /dev/nbd1 00:05:11.295 09:39:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:11.295 09:39:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:11.295 09:39:49 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:11.295 09:39:49 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:11.295 09:39:49 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:11.295 09:39:49 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:11.295 09:39:49 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:11.295 09:39:49 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:05:11.295 09:39:49 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:11.296 09:39:49 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:11.296 09:39:49 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.296 1+0 records in 00:05:11.296 1+0 records out 00:05:11.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503248 s, 8.1 MB/s 00:05:11.296 09:39:49 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.296 09:39:49 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:11.296 09:39:49 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.296 09:39:49 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:11.296 09:39:49 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:11.296 09:39:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.296 09:39:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.296 09:39:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.296 09:39:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.296 09:39:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.296 09:39:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:11.296 { 00:05:11.296 "nbd_device": "/dev/nbd0", 00:05:11.296 "bdev_name": "Malloc0" 00:05:11.296 }, 00:05:11.296 { 00:05:11.296 "nbd_device": "/dev/nbd1", 00:05:11.296 "bdev_name": "Malloc1" 00:05:11.296 } 00:05:11.296 ]' 00:05:11.296 09:39:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:11.296 { 
00:05:11.296 "nbd_device": "/dev/nbd0", 00:05:11.296 "bdev_name": "Malloc0" 00:05:11.296 }, 00:05:11.296 { 00:05:11.296 "nbd_device": "/dev/nbd1", 00:05:11.296 "bdev_name": "Malloc1" 00:05:11.296 } 00:05:11.296 ]' 00:05:11.296 09:39:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.556 09:39:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:11.556 /dev/nbd1' 00:05:11.556 09:39:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:11.556 /dev/nbd1' 00:05:11.556 09:39:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.556 09:39:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:11.556 09:39:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:11.556 09:39:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:11.556 09:39:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:11.556 09:39:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:11.556 09:39:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.557 09:39:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.557 09:39:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:11.557 09:39:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.557 09:39:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:11.557 09:39:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:11.557 256+0 records in 00:05:11.557 256+0 records out 00:05:11.557 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00768477 s, 136 MB/s 00:05:11.557 09:39:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.557 09:39:49 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:11.557 256+0 records in 00:05:11.557 256+0 records out 00:05:11.557 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020691 s, 50.7 MB/s 00:05:11.557 09:39:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.557 09:39:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:11.557 256+0 records in 00:05:11.557 256+0 records out 00:05:11.557 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275269 s, 38.1 MB/s 00:05:11.557 09:39:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:11.557 09:39:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.557 09:39:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.557 09:39:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:11.557 09:39:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.557 09:39:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:11.557 09:39:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:11.557 09:39:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.557 09:39:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:11.557 09:39:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.557 09:39:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:11.557 09:39:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:05:11.557 09:39:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:11.557 09:39:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.557 09:39:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.557 09:39:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:11.557 09:39:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:11.557 09:39:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.557 09:39:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:11.817 09:39:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:11.817 09:39:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:11.817 09:39:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:11.817 09:39:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.817 09:39:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.817 09:39:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:11.817 09:39:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.817 09:39:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.817 09:39:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.817 09:39:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:12.076 09:39:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:12.076 09:39:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:12.076 09:39:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:12.076 09:39:50 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.076 09:39:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.076 09:39:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:12.076 09:39:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.076 09:39:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.076 09:39:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.076 09:39:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.076 09:39:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.076 09:39:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:12.336 09:39:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:12.336 09:39:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.336 09:39:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:12.336 09:39:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:12.336 09:39:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.336 09:39:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:12.336 09:39:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:12.336 09:39:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:12.336 09:39:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:12.336 09:39:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:12.336 09:39:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:12.336 09:39:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:12.596 09:39:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:13.168 
[2024-10-30 09:39:51.768155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.429 [2024-10-30 09:39:51.860205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.429 [2024-10-30 09:39:51.860498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.429 [2024-10-30 09:39:51.984834] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:13.429 [2024-10-30 09:39:51.984904] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:15.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:15.970 09:39:54 event.app_repeat -- event/event.sh@38 -- # waitforlisten 57541 /var/tmp/spdk-nbd.sock 00:05:15.970 09:39:54 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 57541 ']' 00:05:15.970 09:39:54 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.970 09:39:54 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:15.970 09:39:54 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:15.970 09:39:54 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:15.970 09:39:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.970 09:39:54 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:15.970 09:39:54 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:15.970 09:39:54 event.app_repeat -- event/event.sh@39 -- # killprocess 57541 00:05:15.970 09:39:54 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 57541 ']' 00:05:15.970 09:39:54 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 57541 00:05:15.970 09:39:54 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:05:15.970 09:39:54 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:15.970 09:39:54 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57541 00:05:15.970 killing process with pid 57541 00:05:15.970 09:39:54 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:15.970 09:39:54 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:15.970 09:39:54 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57541' 00:05:15.970 09:39:54 event.app_repeat -- common/autotest_common.sh@971 -- # kill 57541 00:05:15.970 09:39:54 event.app_repeat -- common/autotest_common.sh@976 -- # wait 57541 00:05:16.541 spdk_app_start is called in Round 0. 00:05:16.541 Shutdown signal received, stop current app iteration 00:05:16.541 Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 reinitialization... 00:05:16.541 spdk_app_start is called in Round 1. 00:05:16.541 Shutdown signal received, stop current app iteration 00:05:16.541 Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 reinitialization... 00:05:16.541 spdk_app_start is called in Round 2. 
00:05:16.541 Shutdown signal received, stop current app iteration 00:05:16.541 Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 reinitialization... 00:05:16.541 spdk_app_start is called in Round 3. 00:05:16.541 Shutdown signal received, stop current app iteration 00:05:16.541 ************************************ 00:05:16.541 END TEST app_repeat 00:05:16.541 ************************************ 00:05:16.541 09:39:54 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:16.541 09:39:54 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:16.541 00:05:16.541 real 0m18.394s 00:05:16.541 user 0m40.266s 00:05:16.541 sys 0m2.180s 00:05:16.541 09:39:54 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:16.541 09:39:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.541 09:39:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:16.541 09:39:54 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:16.541 09:39:54 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:16.541 09:39:54 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:16.541 09:39:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.541 ************************************ 00:05:16.541 START TEST cpu_locks 00:05:16.541 ************************************ 00:05:16.541 09:39:55 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:16.541 * Looking for test storage... 
00:05:16.541 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:16.541 09:39:55 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:16.541 09:39:55 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:16.541 09:39:55 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:16.541 09:39:55 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.541 09:39:55 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:16.541 09:39:55 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.541 09:39:55 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:16.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.541 --rc genhtml_branch_coverage=1 00:05:16.541 --rc genhtml_function_coverage=1 00:05:16.541 --rc genhtml_legend=1 00:05:16.541 --rc geninfo_all_blocks=1 00:05:16.541 --rc geninfo_unexecuted_blocks=1 00:05:16.541 00:05:16.541 ' 00:05:16.541 09:39:55 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:16.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.541 --rc genhtml_branch_coverage=1 00:05:16.541 --rc genhtml_function_coverage=1 00:05:16.541 --rc genhtml_legend=1 00:05:16.541 --rc geninfo_all_blocks=1 00:05:16.542 --rc geninfo_unexecuted_blocks=1 
00:05:16.542 00:05:16.542 ' 00:05:16.542 09:39:55 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:16.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.542 --rc genhtml_branch_coverage=1 00:05:16.542 --rc genhtml_function_coverage=1 00:05:16.542 --rc genhtml_legend=1 00:05:16.542 --rc geninfo_all_blocks=1 00:05:16.542 --rc geninfo_unexecuted_blocks=1 00:05:16.542 00:05:16.542 ' 00:05:16.542 09:39:55 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:16.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.542 --rc genhtml_branch_coverage=1 00:05:16.542 --rc genhtml_function_coverage=1 00:05:16.542 --rc genhtml_legend=1 00:05:16.542 --rc geninfo_all_blocks=1 00:05:16.542 --rc geninfo_unexecuted_blocks=1 00:05:16.542 00:05:16.542 ' 00:05:16.542 09:39:55 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:16.542 09:39:55 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:16.542 09:39:55 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:16.542 09:39:55 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:16.542 09:39:55 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:16.542 09:39:55 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:16.542 09:39:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.803 ************************************ 00:05:16.803 START TEST default_locks 00:05:16.803 ************************************ 00:05:16.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:16.803 09:39:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:05:16.803 09:39:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57977 00:05:16.803 09:39:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 57977 00:05:16.803 09:39:55 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 57977 ']' 00:05:16.803 09:39:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.803 09:39:55 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.803 09:39:55 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:16.803 09:39:55 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.803 09:39:55 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:16.803 09:39:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.803 [2024-10-30 09:39:55.280933] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:05:16.803 [2024-10-30 09:39:55.281144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57977 ] 00:05:17.064 [2024-10-30 09:39:55.459840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.064 [2024-10-30 09:39:55.562264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.638 09:39:56 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:17.638 09:39:56 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:05:17.638 09:39:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 57977 00:05:17.638 09:39:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.638 09:39:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 57977 00:05:17.899 09:39:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 57977 00:05:17.899 09:39:56 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 57977 ']' 00:05:17.899 09:39:56 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 57977 00:05:17.899 09:39:56 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:05:17.899 09:39:56 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:17.899 09:39:56 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57977 00:05:17.899 killing process with pid 57977 00:05:17.899 09:39:56 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:17.899 09:39:56 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:17.899 09:39:56 event.cpu_locks.default_locks -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 57977' 00:05:17.899 09:39:56 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 57977 00:05:17.899 09:39:56 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 57977 00:05:19.290 09:39:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57977 00:05:19.290 09:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:19.290 09:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57977 00:05:19.290 09:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:19.290 09:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:19.290 09:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:19.290 09:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:19.290 09:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 57977 00:05:19.290 09:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 57977 ']' 00:05:19.290 09:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.290 09:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:19.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.290 09:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:19.290 09:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:19.290 09:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.290 ERROR: process (pid: 57977) is no longer running 00:05:19.290 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (57977) - No such process 00:05:19.290 09:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:19.290 09:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:05:19.290 09:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:19.290 09:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:19.290 09:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:19.290 09:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:19.290 09:39:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:19.290 09:39:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:19.290 09:39:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:19.290 09:39:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:19.290 ************************************ 00:05:19.290 END TEST default_locks 00:05:19.290 ************************************ 00:05:19.290 00:05:19.290 real 0m2.703s 00:05:19.290 user 0m2.670s 00:05:19.290 sys 0m0.476s 00:05:19.290 09:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:19.290 09:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.552 09:39:57 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:19.552 09:39:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # 
'[' 2 -le 1 ']' 00:05:19.552 09:39:57 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:19.552 09:39:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.552 ************************************ 00:05:19.552 START TEST default_locks_via_rpc 00:05:19.552 ************************************ 00:05:19.552 09:39:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:05:19.552 09:39:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58041 00:05:19.552 09:39:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58041 00:05:19.552 09:39:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58041 ']' 00:05:19.552 09:39:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.552 09:39:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.552 09:39:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:19.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.552 09:39:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.552 09:39:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:19.552 09:39:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.552 [2024-10-30 09:39:58.012437] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:05:19.552 [2024-10-30 09:39:58.012739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58041 ] 00:05:19.813 [2024-10-30 09:39:58.174270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.813 [2024-10-30 09:39:58.289351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.386 09:39:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:20.386 09:39:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:20.386 09:39:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:20.386 09:39:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.386 09:39:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.386 09:39:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.386 09:39:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:20.386 09:39:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:20.386 09:39:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:20.386 09:39:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:20.386 09:39:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:20.386 09:39:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.386 09:39:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.386 09:39:58 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.386 09:39:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58041 00:05:20.386 09:39:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58041 00:05:20.386 09:39:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.646 09:39:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58041 00:05:20.646 09:39:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58041 ']' 00:05:20.646 09:39:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58041 00:05:20.646 09:39:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:05:20.646 09:39:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:20.646 09:39:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58041 00:05:20.646 killing process with pid 58041 00:05:20.646 09:39:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:20.646 09:39:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:20.646 09:39:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58041' 00:05:20.646 09:39:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58041 00:05:20.646 09:39:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58041 00:05:22.561 ************************************ 00:05:22.561 END TEST default_locks_via_rpc 00:05:22.561 ************************************ 00:05:22.561 00:05:22.561 real 0m2.732s 00:05:22.561 user 0m2.711s 00:05:22.561 sys 0m0.478s 00:05:22.561 
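The `locks_exist` and `killprocess` steps traced above reduce to two shell idioms: grepping `lslocks` output for SPDK's core lock file name, and probing process liveness with the no-op signal 0. A minimal sketch (helper names mirror the trace; the real bodies in event/cpu_locks.sh and autotest_common.sh carry extra uname/retry logic):

```shell
#!/usr/bin/env bash
# Sketch of the two checks traced above. locks_exist greps lslocks output
# for SPDK's per-core lock file name (/var/tmp/spdk_cpu_lock_NNN); the
# killprocess path probes liveness with signal 0 before killing.

locks_exist() {
    local pid=$1
    # lslocks lists file locks held by the process; a running spdk_tgt
    # shows an entry for its per-core lock file
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

process_alive() {
    local pid=$1
    # kill -0 delivers no signal; it only tests that the pid exists
    kill -0 "$pid" 2>/dev/null
}

# Example: the current shell is always alive
process_alive $$ && echo "alive"
```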
09:40:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:22.561 09:40:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.561 09:40:00 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:22.561 09:40:00 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:22.561 09:40:00 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:22.561 09:40:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.561 ************************************ 00:05:22.561 START TEST non_locking_app_on_locked_coremask 00:05:22.561 ************************************ 00:05:22.561 09:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:05:22.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:22.561 09:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58093 00:05:22.561 09:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58093 /var/tmp/spdk.sock 00:05:22.561 09:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58093 ']' 00:05:22.561 09:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.561 09:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:22.561 09:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.561 09:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.561 09:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:22.562 09:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.562 [2024-10-30 09:40:00.800873] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:05:22.562 [2024-10-30 09:40:00.800994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58093 ] 00:05:22.562 [2024-10-30 09:40:00.954002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.562 [2024-10-30 09:40:01.055087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.132 09:40:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:23.132 09:40:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:23.132 09:40:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58109 00:05:23.132 09:40:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58109 /var/tmp/spdk2.sock 00:05:23.132 09:40:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58109 ']' 00:05:23.132 09:40:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:23.132 09:40:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.132 09:40:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:23.132 09:40:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
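The `waitforlisten` helper invoked throughout this trace polls until the target pid has created its RPC UNIX socket, bounded by `max_retries` (the trace shows `max_retries=100`). A hedged sketch; the real helper in autotest_common.sh differs in detail, and the retry count is a parameter here only so the demo finishes quickly:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: poll for the RPC socket while the
# target pid stays alive, giving up after max_retries attempts.

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    local i=0
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( i < max_retries )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
        [[ -S $rpc_addr ]] && return 0           # socket exists: it is listening
        sleep 0.1
        (( ++i ))
    done
    return 1                                     # timed out
}

# Demo: a pid that has already exited fails fast
( true ) &
dead_pid=$!
wait "$dead_pid"
if ! waitforlisten "$dead_pid" /var/tmp/demo_never_created.sock 3; then
    echo "gave up: target not listening"
fi
```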
00:05:23.132 09:40:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:23.132 09:40:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.132 [2024-10-30 09:40:01.720202] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:05:23.132 [2024-10-30 09:40:01.720496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58109 ] 00:05:23.391 [2024-10-30 09:40:01.899950] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:23.391 [2024-10-30 09:40:01.900014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.651 [2024-10-30 09:40:02.106558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.040 09:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:25.040 09:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:25.040 09:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58093 00:05:25.040 09:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58093 00:05:25.040 09:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:25.040 09:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58093 00:05:25.040 09:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58093 ']' 00:05:25.040 09:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58093 00:05:25.040 09:40:03 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:25.040 09:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:25.040 09:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58093 00:05:25.040 09:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:25.040 09:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:25.040 killing process with pid 58093 00:05:25.040 09:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58093' 00:05:25.040 09:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58093 00:05:25.040 09:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58093 00:05:28.340 09:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58109 00:05:28.340 09:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58109 ']' 00:05:28.340 09:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58109 00:05:28.340 09:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:28.340 09:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:28.340 09:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58109 00:05:28.340 killing process with pid 58109 00:05:28.340 09:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # 
process_name=reactor_0 00:05:28.340 09:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:28.340 09:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58109' 00:05:28.340 09:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58109 00:05:28.340 09:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58109 00:05:29.728 ************************************ 00:05:29.728 END TEST non_locking_app_on_locked_coremask 00:05:29.728 ************************************ 00:05:29.728 00:05:29.728 real 0m7.385s 00:05:29.728 user 0m7.632s 00:05:29.728 sys 0m0.837s 00:05:29.728 09:40:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:29.728 09:40:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.728 09:40:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:29.728 09:40:08 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:29.728 09:40:08 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:29.728 09:40:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.728 ************************************ 00:05:29.728 START TEST locking_app_on_unlocked_coremask 00:05:29.728 ************************************ 00:05:29.728 09:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:05:29.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:29.728 09:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58217 00:05:29.728 09:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58217 /var/tmp/spdk.sock 00:05:29.728 09:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:29.728 09:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58217 ']' 00:05:29.728 09:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.728 09:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:29.729 09:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.729 09:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:29.729 09:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.729 [2024-10-30 09:40:08.250235] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:05:29.729 [2024-10-30 09:40:08.250351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58217 ] 00:05:29.988 [2024-10-30 09:40:08.407431] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:29.988 [2024-10-30 09:40:08.407641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.988 [2024-10-30 09:40:08.510829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.584 09:40:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:30.585 09:40:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:30.585 09:40:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58233 00:05:30.585 09:40:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58233 /var/tmp/spdk2.sock 00:05:30.585 09:40:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58233 ']' 00:05:30.585 09:40:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:30.585 09:40:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.585 09:40:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:30.585 09:40:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.585 09:40:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:30.585 09:40:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.585 [2024-10-30 09:40:09.181151] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:05:30.585 [2024-10-30 09:40:09.181446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58233 ] 00:05:30.846 [2024-10-30 09:40:09.362036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.107 [2024-10-30 09:40:09.569288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.491 09:40:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:32.491 09:40:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:32.491 09:40:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58233 00:05:32.491 09:40:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58233 00:05:32.491 09:40:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:32.491 09:40:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58217 00:05:32.491 09:40:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58217 ']' 00:05:32.491 09:40:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58217 00:05:32.491 09:40:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:32.491 09:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:32.491 09:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58217 00:05:32.491 killing process with pid 58217 00:05:32.491 09:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:32.491 09:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:32.491 09:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58217' 00:05:32.491 09:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58217 00:05:32.491 09:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 58217 00:05:35.792 09:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58233 00:05:35.792 09:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58233 ']' 00:05:35.792 09:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58233 00:05:35.792 09:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:35.792 09:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:35.792 09:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58233 00:05:35.792 killing process with pid 58233 00:05:35.792 09:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:35.792 09:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:35.792 09:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58233' 00:05:35.792 09:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58233 00:05:35.792 09:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@976 -- # wait 58233 00:05:37.243 ************************************ 00:05:37.243 END TEST locking_app_on_unlocked_coremask 00:05:37.243 ************************************ 00:05:37.243 00:05:37.243 real 0m7.425s 00:05:37.243 user 0m7.661s 00:05:37.243 sys 0m0.833s 00:05:37.243 09:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:37.243 09:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.243 09:40:15 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:37.243 09:40:15 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:37.243 09:40:15 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:37.243 09:40:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.243 ************************************ 00:05:37.243 START TEST locking_app_on_locked_coremask 00:05:37.243 ************************************ 00:05:37.243 09:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:05:37.243 09:40:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58341 00:05:37.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:37.243 09:40:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58341 /var/tmp/spdk.sock 00:05:37.243 09:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58341 ']' 00:05:37.243 09:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.243 09:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:37.243 09:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.243 09:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:37.243 09:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.243 09:40:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.243 [2024-10-30 09:40:15.734146] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:05:37.243 [2024-10-30 09:40:15.734267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58341 ] 00:05:37.505 [2024-10-30 09:40:15.886091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.505 [2024-10-30 09:40:15.988967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.072 09:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:38.072 09:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:38.072 09:40:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58352 00:05:38.072 09:40:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58352 /var/tmp/spdk2.sock 00:05:38.072 09:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:38.072 09:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58352 /var/tmp/spdk2.sock 00:05:38.072 09:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:38.072 09:40:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:38.072 09:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.072 09:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:38.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:38.072 09:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.072 09:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58352 /var/tmp/spdk2.sock 00:05:38.072 09:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58352 ']' 00:05:38.072 09:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.072 09:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:38.072 09:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.072 09:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:38.072 09:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.072 [2024-10-30 09:40:16.660603] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:05:38.072 [2024-10-30 09:40:16.660711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58352 ] 00:05:38.333 [2024-10-30 09:40:16.835197] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58341 has claimed it. 00:05:38.333 [2024-10-30 09:40:16.835259] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
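The claim failure recorded above ("Cannot create lock on core 0, probably process 58341 has claimed it") is exclusive-lock contention: only one target may hold a given core's lock file at a time. The semantics can be reproduced with flock(1) on a scratch file; this is an illustration only, SPDK's actual claim logic lives in its app startup code, not in this sketch:

```shell
#!/usr/bin/env bash
# Illustration of per-core lock contention: a second claimant of an
# already-locked file cannot take the exclusive lock and fails fast.

lockfile=$(mktemp)

# First claimant: take an exclusive, non-blocking lock on fd 9
exec 9>"$lockfile"
flock -n 9 && echo "first claim ok"

# Second claimant: flock(1) opens the file on its own descriptor, so its
# attempt conflicts with the lock held above and -n makes it fail at once
if ! flock -n "$lockfile" -c true; then
    echo "second claim refused"
fi

exec 9>&-
rm -f "$lockfile"
```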
00:05:38.905 ERROR: process (pid: 58352) is no longer running 00:05:38.905 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58352) - No such process 00:05:38.905 09:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:38.905 09:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:38.905 09:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:38.905 09:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:38.905 09:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:38.905 09:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:38.905 09:40:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58341 00:05:38.905 09:40:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:38.905 09:40:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58341 00:05:38.905 09:40:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58341 00:05:38.905 09:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58341 ']' 00:05:38.905 09:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58341 00:05:38.905 09:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:38.905 09:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:38.905 09:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58341 00:05:39.165 
killing process with pid 58341 00:05:39.165 09:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:39.165 09:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:39.165 09:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58341' 00:05:39.165 09:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58341 00:05:39.165 09:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58341 00:05:40.566 ************************************ 00:05:40.566 END TEST locking_app_on_locked_coremask 00:05:40.566 ************************************ 00:05:40.566 00:05:40.566 real 0m3.380s 00:05:40.566 user 0m3.597s 00:05:40.566 sys 0m0.570s 00:05:40.566 09:40:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:40.566 09:40:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.566 09:40:19 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:40.566 09:40:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:40.566 09:40:19 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:40.566 09:40:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.566 ************************************ 00:05:40.566 START TEST locking_overlapped_coremask 00:05:40.566 ************************************ 00:05:40.566 09:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:05:40.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:40.566 09:40:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58410 00:05:40.566 09:40:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58410 /var/tmp/spdk.sock 00:05:40.566 09:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 58410 ']' 00:05:40.566 09:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.566 09:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:40.566 09:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.566 09:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:40.566 09:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.566 09:40:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:40.857 [2024-10-30 09:40:19.176469] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:05:40.857 [2024-10-30 09:40:19.176746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58410 ] 00:05:40.857 [2024-10-30 09:40:19.334763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:40.857 [2024-10-30 09:40:19.440205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.857 [2024-10-30 09:40:19.440565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.857 [2024-10-30 09:40:19.440595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.429 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:41.429 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:41.429 09:40:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58428 00:05:41.429 09:40:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58428 /var/tmp/spdk2.sock 00:05:41.429 09:40:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:41.429 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:41.429 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58428 /var/tmp/spdk2.sock 00:05:41.429 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:41.429 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.429 09:40:20 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:41.429 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.429 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58428 /var/tmp/spdk2.sock 00:05:41.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.429 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 58428 ']' 00:05:41.429 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.429 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:41.429 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.429 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:41.429 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.689 [2024-10-30 09:40:20.106805] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:05:41.689 [2024-10-30 09:40:20.106927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58428 ] 00:05:41.689 [2024-10-30 09:40:20.278897] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58410 has claimed it. 00:05:41.689 [2024-10-30 09:40:20.278960] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
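The "Cannot create lock on core 2" error above is the expected outcome: the second target's mask 0x1c overlaps core 2, which pid 58410 already holds. A hypothetical sketch of such a per-core advisory claim, using `mkdir` as a portable atomic lock (the `/tmp/demo_cpu_lock_*` paths are invented for this sketch and are not SPDK's lock files):

```shell
# Try to claim one core; fail if another process already holds its lock.
claim_core() {
    core=$1
    lockdir="/tmp/demo_cpu_lock_${core}"
    if mkdir "$lockdir" 2>/dev/null; then
        echo "$$" > "$lockdir/pid"     # record the owning PID
        return 0
    fi
    echo "Cannot create lock on core $core, probably process $(cat "$lockdir/pid" 2>/dev/null) has claimed it." >&2
    return 1
}
```

The first caller wins the `mkdir`; every later caller sees EEXIST and reports the owner, mirroring the claim_cpu_cores error in the trace.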
00:05:42.262 ERROR: process (pid: 58428) is no longer running 00:05:42.262 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58428) - No such process 00:05:42.262 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:42.262 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:42.262 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:42.262 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:42.262 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:42.262 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:42.262 09:40:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:42.262 09:40:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:42.262 09:40:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:42.262 09:40:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:42.262 09:40:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58410 00:05:42.262 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 58410 ']' 00:05:42.262 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 58410 00:05:42.262 09:40:20 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:05:42.262 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:42.262 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58410 00:05:42.262 killing process with pid 58410 00:05:42.262 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:42.262 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:42.262 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58410' 00:05:42.262 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 58410 00:05:42.262 09:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 58410 00:05:44.177 00:05:44.177 real 0m3.182s 00:05:44.177 user 0m8.648s 00:05:44.177 sys 0m0.435s 00:05:44.177 09:40:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:44.177 09:40:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.177 ************************************ 00:05:44.177 END TEST locking_overlapped_coremask 00:05:44.177 ************************************ 00:05:44.178 09:40:22 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:44.178 09:40:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:44.178 09:40:22 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:44.178 09:40:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.178 ************************************ 00:05:44.178 START TEST 
locking_overlapped_coremask_via_rpc 00:05:44.178 ************************************ 00:05:44.178 09:40:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:05:44.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.178 09:40:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58487 00:05:44.178 09:40:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58487 /var/tmp/spdk.sock 00:05:44.178 09:40:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58487 ']' 00:05:44.178 09:40:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.178 09:40:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:44.178 09:40:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.178 09:40:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:44.178 09:40:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:44.178 09:40:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.178 [2024-10-30 09:40:22.421673] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:05:44.178 [2024-10-30 09:40:22.421797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58487 ] 00:05:44.178 [2024-10-30 09:40:22.582802] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:44.178 [2024-10-30 09:40:22.582858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.178 [2024-10-30 09:40:22.691801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.178 [2024-10-30 09:40:22.692154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.178 [2024-10-30 09:40:22.692321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.769 09:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:44.769 09:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:44.769 09:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58504 00:05:44.769 09:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58504 /var/tmp/spdk2.sock 00:05:44.769 09:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:44.769 09:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58504 ']' 00:05:44.769 09:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.769 09:40:23 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:44.769 09:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.769 09:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:44.769 09:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.769 [2024-10-30 09:40:23.373353] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:05:44.769 [2024-10-30 09:40:23.373647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58504 ] 00:05:45.029 [2024-10-30 09:40:23.548200] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:45.029 [2024-10-30 09:40:23.548271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:45.289 [2024-10-30 09:40:23.781049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.289 [2024-10-30 09:40:23.784534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.289 [2024-10-30 09:40:23.784546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.676 09:40:25 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.676 [2024-10-30 09:40:25.026226] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58487 has claimed it. 00:05:46.676 request: 00:05:46.676 { 00:05:46.676 "method": "framework_enable_cpumask_locks", 00:05:46.676 "req_id": 1 00:05:46.676 } 00:05:46.676 Got JSON-RPC error response 00:05:46.676 response: 00:05:46.676 { 00:05:46.676 "code": -32603, 00:05:46.676 "message": "Failed to claim CPU core: 2" 00:05:46.676 } 00:05:46.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
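The NOT-wrapped rpc_cmd above succeeds as a test precisely because the RPC fails: the JSON-RPC error object (code -32603) becomes a nonzero exit status, which the trace then records as `es=1`. A sketch of that mapping — the response text is copied from the log, but the one-line parsing here is illustrative, not SPDK's:

```shell
# Extract the error code from a JSON-RPC error response and map any
# nonzero code to a failing exit status, as the NOT wrapper expects.
response='{"code": -32603, "message": "Failed to claim CPU core: 2"}'
code=$(printf '%s' "$response" | sed -n 's/.*"code": \(-\{0,1\}[0-9]*\).*/\1/p')
if [ "$code" -ne 0 ]; then
    es=1    # mirrors the "es=1" assignment in the trace
fi
```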
00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58487 /var/tmp/spdk.sock 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58487 ']' 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58504 /var/tmp/spdk2.sock 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58504 ']' 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
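The check_remaining_locks helper traced elsewhere in this run globs the actual `/var/tmp/spdk_cpu_lock_*` files and compares them against a brace-expanded expected list. A hypothetical re-creation of that pattern under an invented `/tmp/demo_locks` path:

```shell
# Compare the lock files actually present against the expected set;
# a stale or missing per-core lock makes the strings differ.
mkdir -p /tmp/demo_locks
touch /tmp/demo_locks/lock_000 /tmp/demo_locks/lock_001 /tmp/demo_locks/lock_002
locks=$(echo /tmp/demo_locks/lock_*)                 # glob: what exists
expected=$(echo /tmp/demo_locks/lock_{000..002})     # brace expansion: what should exist
if [ "$locks" = "$expected" ]; then
    remaining_ok=1
else
    remaining_ok=0
fi
```

The glob sorts lexicographically and the brace expansion is already ordered, so a plain string comparison suffices, just as in the `[[ ... == ... ]]` test in the trace.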
00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:46.676 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.937 ************************************ 00:05:46.937 END TEST locking_overlapped_coremask_via_rpc 00:05:46.937 ************************************ 00:05:46.937 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:46.937 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:46.937 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:46.937 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:46.937 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:46.937 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:46.937 00:05:46.937 real 0m3.113s 00:05:46.937 user 0m1.103s 00:05:46.937 sys 0m0.127s 00:05:46.937 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:46.937 09:40:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.937 09:40:25 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:46.937 09:40:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58487 ]] 00:05:46.937 09:40:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 58487 00:05:46.937 09:40:25 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58487 ']' 00:05:46.937 09:40:25 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58487 00:05:46.937 09:40:25 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:46.937 09:40:25 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:46.937 09:40:25 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58487 00:05:46.937 killing process with pid 58487 00:05:46.937 09:40:25 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:46.937 09:40:25 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:46.937 09:40:25 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58487' 00:05:46.937 09:40:25 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 58487 00:05:46.937 09:40:25 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 58487 00:05:48.846 09:40:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58504 ]] 00:05:48.847 09:40:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58504 00:05:48.847 09:40:27 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58504 ']' 00:05:48.847 09:40:27 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58504 00:05:48.847 09:40:27 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:48.847 09:40:27 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:48.847 09:40:27 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58504 00:05:48.847 killing process with pid 58504 00:05:48.847 09:40:27 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:48.847 09:40:27 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:48.847 09:40:27 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 58504' 00:05:48.847 09:40:27 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 58504 00:05:48.847 09:40:27 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 58504 00:05:50.229 09:40:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:50.229 Process with pid 58487 is not found 00:05:50.229 09:40:28 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:50.229 09:40:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58487 ]] 00:05:50.229 09:40:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58487 00:05:50.229 09:40:28 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58487 ']' 00:05:50.229 09:40:28 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58487 00:05:50.229 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (58487) - No such process 00:05:50.229 09:40:28 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 58487 is not found' 00:05:50.229 09:40:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58504 ]] 00:05:50.229 09:40:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58504 00:05:50.229 Process with pid 58504 is not found 00:05:50.229 09:40:28 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58504 ']' 00:05:50.229 09:40:28 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58504 00:05:50.229 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (58504) - No such process 00:05:50.229 09:40:28 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 58504 is not found' 00:05:50.229 09:40:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:50.229 00:05:50.229 real 0m33.599s 00:05:50.229 user 0m57.503s 00:05:50.229 sys 0m4.573s 00:05:50.229 09:40:28 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:50.229 ************************************ 00:05:50.229 09:40:28 event.cpu_locks -- common/autotest_common.sh@10 
-- # set +x 00:05:50.229 END TEST cpu_locks 00:05:50.229 ************************************ 00:05:50.229 00:05:50.229 real 1m2.072s 00:05:50.229 user 1m53.752s 00:05:50.229 sys 0m7.563s 00:05:50.229 09:40:28 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:50.229 09:40:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.229 ************************************ 00:05:50.229 END TEST event 00:05:50.229 ************************************ 00:05:50.229 09:40:28 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:50.229 09:40:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:50.229 09:40:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.229 09:40:28 -- common/autotest_common.sh@10 -- # set +x 00:05:50.229 ************************************ 00:05:50.229 START TEST thread 00:05:50.229 ************************************ 00:05:50.229 09:40:28 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:50.229 * Looking for test storage... 
00:05:50.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:50.229 09:40:28 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:50.229 09:40:28 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:50.229 09:40:28 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:50.229 09:40:28 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:50.229 09:40:28 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.229 09:40:28 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.229 09:40:28 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.229 09:40:28 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.229 09:40:28 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.229 09:40:28 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.229 09:40:28 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.229 09:40:28 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.229 09:40:28 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.229 09:40:28 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.229 09:40:28 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.229 09:40:28 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:50.229 09:40:28 thread -- scripts/common.sh@345 -- # : 1 00:05:50.229 09:40:28 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.229 09:40:28 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.229 09:40:28 thread -- scripts/common.sh@365 -- # decimal 1 00:05:50.229 09:40:28 thread -- scripts/common.sh@353 -- # local d=1 00:05:50.229 09:40:28 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.229 09:40:28 thread -- scripts/common.sh@355 -- # echo 1 00:05:50.229 09:40:28 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.229 09:40:28 thread -- scripts/common.sh@366 -- # decimal 2 00:05:50.229 09:40:28 thread -- scripts/common.sh@353 -- # local d=2 00:05:50.229 09:40:28 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.229 09:40:28 thread -- scripts/common.sh@355 -- # echo 2 00:05:50.489 09:40:28 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.489 09:40:28 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.489 09:40:28 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.489 09:40:28 thread -- scripts/common.sh@368 -- # return 0 00:05:50.489 09:40:28 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.489 09:40:28 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:50.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.489 --rc genhtml_branch_coverage=1 00:05:50.489 --rc genhtml_function_coverage=1 00:05:50.489 --rc genhtml_legend=1 00:05:50.489 --rc geninfo_all_blocks=1 00:05:50.489 --rc geninfo_unexecuted_blocks=1 00:05:50.489 00:05:50.489 ' 00:05:50.489 09:40:28 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:50.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.489 --rc genhtml_branch_coverage=1 00:05:50.489 --rc genhtml_function_coverage=1 00:05:50.489 --rc genhtml_legend=1 00:05:50.489 --rc geninfo_all_blocks=1 00:05:50.489 --rc geninfo_unexecuted_blocks=1 00:05:50.489 00:05:50.489 ' 00:05:50.489 09:40:28 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:50.489 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.489 --rc genhtml_branch_coverage=1 00:05:50.489 --rc genhtml_function_coverage=1 00:05:50.489 --rc genhtml_legend=1 00:05:50.489 --rc geninfo_all_blocks=1 00:05:50.489 --rc geninfo_unexecuted_blocks=1 00:05:50.489 00:05:50.489 ' 00:05:50.489 09:40:28 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:50.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.489 --rc genhtml_branch_coverage=1 00:05:50.489 --rc genhtml_function_coverage=1 00:05:50.489 --rc genhtml_legend=1 00:05:50.489 --rc geninfo_all_blocks=1 00:05:50.489 --rc geninfo_unexecuted_blocks=1 00:05:50.489 00:05:50.489 ' 00:05:50.489 09:40:28 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:50.489 09:40:28 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:50.489 09:40:28 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.489 09:40:28 thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.489 ************************************ 00:05:50.489 START TEST thread_poller_perf 00:05:50.489 ************************************ 00:05:50.489 09:40:28 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:50.489 [2024-10-30 09:40:28.893381] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:05:50.489 [2024-10-30 09:40:28.893633] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58665 ] 00:05:50.489 [2024-10-30 09:40:29.051975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.749 [2024-10-30 09:40:29.155443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.749 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:52.132 [2024-10-30T09:40:30.752Z] ====================================== 00:05:52.132 [2024-10-30T09:40:30.752Z] busy:2616367428 (cyc) 00:05:52.132 [2024-10-30T09:40:30.752Z] total_run_count: 305000 00:05:52.132 [2024-10-30T09:40:30.752Z] tsc_hz: 2600000000 (cyc) 00:05:52.132 [2024-10-30T09:40:30.752Z] ====================================== 00:05:52.132 [2024-10-30T09:40:30.752Z] poller_cost: 8578 (cyc), 3299 (nsec) 00:05:52.132 00:05:52.132 real 0m1.463s 00:05:52.132 user 0m1.279s 00:05:52.132 sys 0m0.076s 00:05:52.132 09:40:30 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:52.132 09:40:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:52.132 ************************************ 00:05:52.132 END TEST thread_poller_perf 00:05:52.132 ************************************ 00:05:52.132 09:40:30 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:52.132 09:40:30 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:52.132 09:40:30 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:52.132 09:40:30 thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.132 ************************************ 00:05:52.132 START TEST thread_poller_perf 00:05:52.132 
************************************ 00:05:52.132 09:40:30 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:52.132 [2024-10-30 09:40:30.421179] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:05:52.132 [2024-10-30 09:40:30.421288] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58701 ] 00:05:52.132 [2024-10-30 09:40:30.603119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.132 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:52.132 [2024-10-30 09:40:30.705761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.520 [2024-10-30T09:40:32.140Z] ====================================== 00:05:53.520 [2024-10-30T09:40:32.140Z] busy:2603142146 (cyc) 00:05:53.520 [2024-10-30T09:40:32.140Z] total_run_count: 3935000 00:05:53.520 [2024-10-30T09:40:32.140Z] tsc_hz: 2600000000 (cyc) 00:05:53.520 [2024-10-30T09:40:32.140Z] ====================================== 00:05:53.520 [2024-10-30T09:40:32.140Z] poller_cost: 661 (cyc), 254 (nsec) 00:05:53.520 00:05:53.520 real 0m1.475s 00:05:53.520 user 0m1.291s 00:05:53.520 sys 0m0.075s 00:05:53.520 09:40:31 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:53.520 ************************************ 00:05:53.520 END TEST thread_poller_perf 00:05:53.520 ************************************ 00:05:53.520 09:40:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:53.520 09:40:31 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:53.520 ************************************ 00:05:53.520 END TEST thread 00:05:53.520 ************************************ 00:05:53.520 
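The poller_cost figures printed above follow from simple arithmetic: busy cycles divided by total_run_count gives cycles per poll, and dividing busy time (cycles over tsc_hz) by the run count gives nanoseconds per poll. A sketch of that arithmetic with the numbers copied from the second (0-microsecond-period) run:

```shell
# Reproduce poller_cost from the reported counters.
busy=2603142146          # busy (cyc)
runs=3935000             # total_run_count
tsc_hz=2600000000        # tsc_hz (cyc)
cost_cyc=$((busy / runs))
cost_nsec=$((busy * 1000000000 / tsc_hz / runs))
```

The same arithmetic on the first run (2616367428 cyc over 305000 polls) yields its reported 8578 cyc / 3299 nsec.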
00:05:53.520 real 0m3.219s 00:05:53.520 user 0m2.693s 00:05:53.520 sys 0m0.261s 00:05:53.520 09:40:31 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:53.520 09:40:31 thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.520 09:40:31 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:53.520 09:40:31 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:53.520 09:40:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:53.520 09:40:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:53.520 09:40:31 -- common/autotest_common.sh@10 -- # set +x 00:05:53.520 ************************************ 00:05:53.520 START TEST app_cmdline 00:05:53.520 ************************************ 00:05:53.520 09:40:31 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:53.520 * Looking for test storage... 00:05:53.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:53.520 09:40:32 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:53.520 09:40:32 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:53.520 09:40:32 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:53.520 09:40:32 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:53.520 09:40:32 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.520 09:40:32 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.520 09:40:32 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.520 09:40:32 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.520 09:40:32 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.520 09:40:32 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.520 09:40:32 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.520 09:40:32 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:05:53.520 09:40:32 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.520 09:40:32 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.520 09:40:32 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.520 09:40:32 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:53.520 09:40:32 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:53.520 09:40:32 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.520 09:40:32 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.520 09:40:32 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:53.520 09:40:32 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:53.520 09:40:32 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.520 09:40:32 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:53.520 09:40:32 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.520 09:40:32 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:53.520 09:40:32 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:53.520 09:40:32 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.520 09:40:32 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:53.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:53.783 09:40:32 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.783 09:40:32 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.783 09:40:32 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.783 09:40:32 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:53.783 09:40:32 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.783 09:40:32 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:53.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.783 --rc genhtml_branch_coverage=1 00:05:53.783 --rc genhtml_function_coverage=1 00:05:53.783 --rc genhtml_legend=1 00:05:53.783 --rc geninfo_all_blocks=1 00:05:53.783 --rc geninfo_unexecuted_blocks=1 00:05:53.783 00:05:53.783 ' 00:05:53.783 09:40:32 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:53.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.783 --rc genhtml_branch_coverage=1 00:05:53.783 --rc genhtml_function_coverage=1 00:05:53.783 --rc genhtml_legend=1 00:05:53.783 --rc geninfo_all_blocks=1 00:05:53.783 --rc geninfo_unexecuted_blocks=1 00:05:53.783 00:05:53.783 ' 00:05:53.783 09:40:32 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:53.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.783 --rc genhtml_branch_coverage=1 00:05:53.783 --rc genhtml_function_coverage=1 00:05:53.783 --rc genhtml_legend=1 00:05:53.783 --rc geninfo_all_blocks=1 00:05:53.783 --rc geninfo_unexecuted_blocks=1 00:05:53.783 00:05:53.783 ' 00:05:53.783 09:40:32 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:53.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.783 --rc genhtml_branch_coverage=1 00:05:53.783 --rc genhtml_function_coverage=1 00:05:53.783 --rc genhtml_legend=1 00:05:53.783 --rc geninfo_all_blocks=1 00:05:53.783 --rc 
geninfo_unexecuted_blocks=1 00:05:53.783 00:05:53.783 ' 00:05:53.783 09:40:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:53.783 09:40:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=58785 00:05:53.783 09:40:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 58785 00:05:53.783 09:40:32 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 58785 ']' 00:05:53.783 09:40:32 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.783 09:40:32 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:53.783 09:40:32 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:53.783 09:40:32 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.783 09:40:32 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:53.783 09:40:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:53.783 [2024-10-30 09:40:32.219936] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:05:53.783 [2024-10-30 09:40:32.220051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58785 ] 00:05:53.783 [2024-10-30 09:40:32.379875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.045 [2024-10-30 09:40:32.481178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.616 09:40:33 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:54.616 09:40:33 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:05:54.616 09:40:33 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:54.878 { 00:05:54.878 "version": "SPDK v25.01-pre git sha1 bfbfb6d81", 00:05:54.878 "fields": { 00:05:54.878 "major": 25, 00:05:54.878 "minor": 1, 00:05:54.878 "patch": 0, 00:05:54.878 "suffix": "-pre", 00:05:54.878 "commit": "bfbfb6d81" 00:05:54.878 } 00:05:54.878 } 00:05:54.878 09:40:33 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:54.878 09:40:33 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:54.878 09:40:33 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:54.878 09:40:33 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:54.878 09:40:33 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:54.878 09:40:33 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:54.878 09:40:33 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.878 09:40:33 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:54.878 09:40:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:54.878 09:40:33 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.878 09:40:33 app_cmdline -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:54.878 09:40:33 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:54.878 09:40:33 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:54.878 09:40:33 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:54.878 09:40:33 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:54.878 09:40:33 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:54.878 09:40:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.878 09:40:33 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:54.878 09:40:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.878 09:40:33 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:54.878 09:40:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.878 09:40:33 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:54.878 09:40:33 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:54.878 09:40:33 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:55.139 request: 00:05:55.139 { 00:05:55.139 "method": "env_dpdk_get_mem_stats", 00:05:55.139 "req_id": 1 00:05:55.139 } 00:05:55.140 Got JSON-RPC error response 00:05:55.140 response: 00:05:55.140 { 00:05:55.140 "code": -32601, 00:05:55.140 "message": "Method not found" 00:05:55.140 } 00:05:55.140 09:40:33 app_cmdline -- common/autotest_common.sh@653 -- # es=1 
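The env_dpdk_get_mem_stats failure above is the expected outcome: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any other method is rejected with the standard JSON-RPC "Method not found" error (-32601), which is what the test asserts on. A toy dispatcher illustrating the allowlist behaviour (a hypothetical sketch, not SPDK's actual implementation):

```python
# Hypothetical sketch of an --rpcs-allowed style dispatcher: methods outside
# the allowlist get the standard JSON-RPC error code -32601 ("Method not
# found"), matching the error object logged above.

ALLOWED = {"spdk_get_version", "rpc_get_methods"}

def dispatch(method: str, req_id: int) -> dict:
    if method not in ALLOWED:
        return {"jsonrpc": "2.0", "id": req_id,
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": req_id, "result": {}}

resp = dispatch("env_dpdk_get_mem_stats", 1)
print(resp["error"]["code"])  # -32601
```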
00:05:55.140 09:40:33 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:55.140 09:40:33 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:55.140 09:40:33 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:55.140 09:40:33 app_cmdline -- app/cmdline.sh@1 -- # killprocess 58785 00:05:55.140 09:40:33 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 58785 ']' 00:05:55.140 09:40:33 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 58785 00:05:55.140 09:40:33 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:05:55.140 09:40:33 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:55.140 09:40:33 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58785 00:05:55.140 killing process with pid 58785 00:05:55.140 09:40:33 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:55.140 09:40:33 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:55.140 09:40:33 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58785' 00:05:55.140 09:40:33 app_cmdline -- common/autotest_common.sh@971 -- # kill 58785 00:05:55.140 09:40:33 app_cmdline -- common/autotest_common.sh@976 -- # wait 58785 00:05:56.527 ************************************ 00:05:56.527 END TEST app_cmdline 00:05:56.527 ************************************ 00:05:56.527 00:05:56.527 real 0m3.073s 00:05:56.527 user 0m3.413s 00:05:56.527 sys 0m0.440s 00:05:56.527 09:40:35 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:56.527 09:40:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:56.527 09:40:35 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:56.527 09:40:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:56.527 09:40:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:56.527 09:40:35 -- 
common/autotest_common.sh@10 -- # set +x 00:05:56.527 ************************************ 00:05:56.527 START TEST version 00:05:56.527 ************************************ 00:05:56.527 09:40:35 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:56.790 * Looking for test storage... 00:05:56.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:56.790 09:40:35 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:56.790 09:40:35 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:56.790 09:40:35 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:56.790 09:40:35 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:56.790 09:40:35 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.790 09:40:35 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.790 09:40:35 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.790 09:40:35 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.790 09:40:35 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.790 09:40:35 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.790 09:40:35 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.790 09:40:35 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.790 09:40:35 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.790 09:40:35 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.790 09:40:35 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.790 09:40:35 version -- scripts/common.sh@344 -- # case "$op" in 00:05:56.790 09:40:35 version -- scripts/common.sh@345 -- # : 1 00:05:56.790 09:40:35 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.790 09:40:35 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.790 09:40:35 version -- scripts/common.sh@365 -- # decimal 1 00:05:56.790 09:40:35 version -- scripts/common.sh@353 -- # local d=1 00:05:56.790 09:40:35 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.790 09:40:35 version -- scripts/common.sh@355 -- # echo 1 00:05:56.790 09:40:35 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.790 09:40:35 version -- scripts/common.sh@366 -- # decimal 2 00:05:56.790 09:40:35 version -- scripts/common.sh@353 -- # local d=2 00:05:56.790 09:40:35 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.790 09:40:35 version -- scripts/common.sh@355 -- # echo 2 00:05:56.790 09:40:35 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.790 09:40:35 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.790 09:40:35 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.790 09:40:35 version -- scripts/common.sh@368 -- # return 0 00:05:56.790 09:40:35 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.790 09:40:35 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:56.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.790 --rc genhtml_branch_coverage=1 00:05:56.790 --rc genhtml_function_coverage=1 00:05:56.790 --rc genhtml_legend=1 00:05:56.790 --rc geninfo_all_blocks=1 00:05:56.790 --rc geninfo_unexecuted_blocks=1 00:05:56.790 00:05:56.790 ' 00:05:56.790 09:40:35 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:56.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.790 --rc genhtml_branch_coverage=1 00:05:56.790 --rc genhtml_function_coverage=1 00:05:56.790 --rc genhtml_legend=1 00:05:56.790 --rc geninfo_all_blocks=1 00:05:56.790 --rc geninfo_unexecuted_blocks=1 00:05:56.790 00:05:56.790 ' 00:05:56.790 09:40:35 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:56.790 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.790 --rc genhtml_branch_coverage=1 00:05:56.790 --rc genhtml_function_coverage=1 00:05:56.790 --rc genhtml_legend=1 00:05:56.790 --rc geninfo_all_blocks=1 00:05:56.790 --rc geninfo_unexecuted_blocks=1 00:05:56.790 00:05:56.790 ' 00:05:56.790 09:40:35 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:56.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.790 --rc genhtml_branch_coverage=1 00:05:56.790 --rc genhtml_function_coverage=1 00:05:56.790 --rc genhtml_legend=1 00:05:56.790 --rc geninfo_all_blocks=1 00:05:56.790 --rc geninfo_unexecuted_blocks=1 00:05:56.790 00:05:56.790 ' 00:05:56.790 09:40:35 version -- app/version.sh@17 -- # get_header_version major 00:05:56.790 09:40:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:56.790 09:40:35 version -- app/version.sh@14 -- # tr -d '"' 00:05:56.790 09:40:35 version -- app/version.sh@14 -- # cut -f2 00:05:56.790 09:40:35 version -- app/version.sh@17 -- # major=25 00:05:56.790 09:40:35 version -- app/version.sh@18 -- # get_header_version minor 00:05:56.790 09:40:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:56.790 09:40:35 version -- app/version.sh@14 -- # tr -d '"' 00:05:56.790 09:40:35 version -- app/version.sh@14 -- # cut -f2 00:05:56.790 09:40:35 version -- app/version.sh@18 -- # minor=1 00:05:56.790 09:40:35 version -- app/version.sh@19 -- # get_header_version patch 00:05:56.790 09:40:35 version -- app/version.sh@14 -- # cut -f2 00:05:56.790 09:40:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:56.790 09:40:35 version -- app/version.sh@14 -- # tr -d '"' 00:05:56.790 09:40:35 version -- app/version.sh@19 -- # patch=0 00:05:56.790 
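From the header values extracted above (major=25, minor=1, patch=0) version.sh assembles a version string and compares it against what the installed Python package reports. A sketch of that assembly, inferred from the script's printed results rather than copied from its source (the patch-append rule is an assumption):

```python
# Rebuild the version string the way version.sh appears to: patch is appended
# only when non-zero, and a "-pre" suffix maps to an "rc0" pre-release tag.

def spdk_version(major: int, minor: int, patch: int, suffix: str) -> str:
    version = f"{major}.{minor}"
    if patch != 0:                 # assumed: patch omitted when zero
        version += f".{patch}"
    if suffix == "-pre":
        version += "rc0"
    return version

print(spdk_version(25, 1, 0, "-pre"))  # 25.1rc0
```

For this run the result is 25.1rc0, which is why the `[[ 25.1rc0 == \2\5\.\1\r\c\0 ]]` comparison in the log succeeds.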
09:40:35 version -- app/version.sh@20 -- # get_header_version suffix 00:05:56.790 09:40:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:56.790 09:40:35 version -- app/version.sh@14 -- # cut -f2 00:05:56.790 09:40:35 version -- app/version.sh@14 -- # tr -d '"' 00:05:56.790 09:40:35 version -- app/version.sh@20 -- # suffix=-pre 00:05:56.790 09:40:35 version -- app/version.sh@22 -- # version=25.1 00:05:56.790 09:40:35 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:56.790 09:40:35 version -- app/version.sh@28 -- # version=25.1rc0 00:05:56.790 09:40:35 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:56.790 09:40:35 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:56.790 09:40:35 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:56.790 09:40:35 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:56.790 ************************************ 00:05:56.790 END TEST version 00:05:56.790 ************************************ 00:05:56.790 00:05:56.790 real 0m0.205s 00:05:56.790 user 0m0.121s 00:05:56.790 sys 0m0.106s 00:05:56.790 09:40:35 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:56.790 09:40:35 version -- common/autotest_common.sh@10 -- # set +x 00:05:56.790 09:40:35 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:56.790 09:40:35 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:05:56.790 09:40:35 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:05:56.790 09:40:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:56.790 09:40:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:56.790 09:40:35 -- 
common/autotest_common.sh@10 -- # set +x 00:05:56.790 ************************************ 00:05:56.790 START TEST bdev_raid 00:05:56.790 ************************************ 00:05:56.790 09:40:35 bdev_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:05:57.053 * Looking for test storage... 00:05:57.053 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:57.053 09:40:35 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:57.053 09:40:35 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:05:57.053 09:40:35 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:57.053 09:40:35 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@345 -- # : 1 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.053 09:40:35 bdev_raid -- scripts/common.sh@368 -- # return 0 00:05:57.053 09:40:35 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.053 09:40:35 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:57.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.053 --rc genhtml_branch_coverage=1 00:05:57.053 --rc genhtml_function_coverage=1 00:05:57.053 --rc genhtml_legend=1 00:05:57.053 --rc geninfo_all_blocks=1 00:05:57.053 --rc geninfo_unexecuted_blocks=1 00:05:57.053 00:05:57.053 ' 00:05:57.053 09:40:35 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:57.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.053 --rc genhtml_branch_coverage=1 00:05:57.053 --rc genhtml_function_coverage=1 00:05:57.053 --rc genhtml_legend=1 00:05:57.053 --rc geninfo_all_blocks=1 00:05:57.053 --rc geninfo_unexecuted_blocks=1 00:05:57.053 00:05:57.053 ' 00:05:57.053 09:40:35 bdev_raid -- common/autotest_common.sh@1705 -- 
# export 'LCOV=lcov 00:05:57.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.053 --rc genhtml_branch_coverage=1 00:05:57.053 --rc genhtml_function_coverage=1 00:05:57.053 --rc genhtml_legend=1 00:05:57.053 --rc geninfo_all_blocks=1 00:05:57.053 --rc geninfo_unexecuted_blocks=1 00:05:57.053 00:05:57.053 ' 00:05:57.053 09:40:35 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:57.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.053 --rc genhtml_branch_coverage=1 00:05:57.053 --rc genhtml_function_coverage=1 00:05:57.053 --rc genhtml_legend=1 00:05:57.053 --rc geninfo_all_blocks=1 00:05:57.053 --rc geninfo_unexecuted_blocks=1 00:05:57.053 00:05:57.053 ' 00:05:57.053 09:40:35 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:57.053 09:40:35 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:05:57.053 09:40:35 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:05:57.053 09:40:35 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:05:57.053 09:40:35 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:05:57.053 09:40:35 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:05:57.053 09:40:35 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:05:57.053 09:40:35 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:57.053 09:40:35 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:57.053 09:40:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:05:57.053 ************************************ 00:05:57.053 START TEST raid1_resize_data_offset_test 00:05:57.053 ************************************ 00:05:57.053 09:40:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1127 -- # raid_resize_data_offset_test 00:05:57.053 09:40:35 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=58961 00:05:57.053 Process raid pid: 58961 00:05:57.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.053 09:40:35 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 58961' 00:05:57.053 09:40:35 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 58961 00:05:57.053 09:40:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@833 -- # '[' -z 58961 ']' 00:05:57.053 09:40:35 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:05:57.053 09:40:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.053 09:40:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:57.053 09:40:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.053 09:40:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:57.053 09:40:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:57.053 [2024-10-30 09:40:35.614345] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:05:57.053 [2024-10-30 09:40:35.614569] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:57.314 [2024-10-30 09:40:35.778368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.314 [2024-10-30 09:40:35.880731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.575 [2024-10-30 09:40:36.020643] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:57.575 [2024-10-30 09:40:36.020841] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:58.147 09:40:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:58.147 09:40:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@866 -- # return 0 00:05:58.147 09:40:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:05:58.147 09:40:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.147 09:40:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.147 malloc0 00:05:58.147 09:40:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.147 09:40:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:05:58.147 09:40:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.147 09:40:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.147 malloc1 00:05:58.147 09:40:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.147 09:40:36 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:05:58.147 09:40:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.147 09:40:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.147 null0 00:05:58.147 09:40:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.147 09:40:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:05:58.147 09:40:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.147 09:40:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.147 [2024-10-30 09:40:36.614104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:05:58.147 [2024-10-30 09:40:36.615902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:05:58.147 [2024-10-30 09:40:36.615945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:05:58.147 [2024-10-30 09:40:36.616210] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:05:58.147 [2024-10-30 09:40:36.616252] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:05:58.147 [2024-10-30 09:40:36.616542] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:05:58.147 [2024-10-30 09:40:36.616710] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:05:58.147 [2024-10-30 09:40:36.616796] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:05:58.147 [2024-10-30 09:40:36.616979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
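The raid bdev above reports blockcnt 129024 over base malloc bdevs created as 64 MB with a 512-byte block size, and the test next checks a data_offset of 2048 blocks. Those numbers are mutually consistent: 64 MiB is 131072 blocks, and reserving 2048 of them leaves 129024. A quick arithmetic cross-check (the layout interpretation, usable blocks = base blockcnt minus data_offset, is an assumption based on the logged numbers):

```python
# Cross-check the raid geometry printed in the log.
# Assumed layout: usable raid blocks = base blockcnt - per-base data_offset.

BLOCKLEN = 512
base_bytes = 64 * 1024 * 1024          # bdev_malloc_create ... 64 512 (64 MB)
base_blocks = base_bytes // BLOCKLEN   # 131072 blocks
data_offset = 2048                     # blocks reserved at the front (1 MiB)

print(base_blocks - data_offset)  # 129024, the blockcnt reported above
```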
00:05:58.147 09:40:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.147 09:40:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:58.148 09:40:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.148 09:40:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:05:58.148 09:40:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.148 09:40:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.148 09:40:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:05:58.148 09:40:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:05:58.148 09:40:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.148 09:40:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.148 [2024-10-30 09:40:36.658141] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:05:58.148 09:40:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.148 09:40:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:05:58.148 09:40:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.148 09:40:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.721 malloc2 00:05:58.721 09:40:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.721 09:40:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:05:58.721 09:40:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.721 09:40:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.721 [2024-10-30 09:40:37.049986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:05:58.721 [2024-10-30 09:40:37.061735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:05:58.721 09:40:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.721 [2024-10-30 09:40:37.063560] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:05:58.721 09:40:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:05:58.721 09:40:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:58.721 09:40:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.721 09:40:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.721 09:40:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.721 09:40:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:05:58.721 09:40:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 58961 00:05:58.721 09:40:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@952 -- # '[' -z 58961 ']' 00:05:58.721 09:40:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # kill -0 58961 00:05:58.721 09:40:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # uname 00:05:58.721 09:40:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux 
']' 00:05:58.721 09:40:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58961 00:05:58.721 killing process with pid 58961 00:05:58.721 09:40:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:58.721 09:40:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:58.721 09:40:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58961' 00:05:58.721 09:40:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@971 -- # kill 58961 00:05:58.721 [2024-10-30 09:40:37.127817] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:05:58.721 09:40:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@976 -- # wait 58961 00:05:58.721 [2024-10-30 09:40:37.128471] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:05:58.721 [2024-10-30 09:40:37.128649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:58.721 [2024-10-30 09:40:37.128670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:05:58.721 [2024-10-30 09:40:37.151714] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:58.721 [2024-10-30 09:40:37.152202] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:05:58.721 [2024-10-30 09:40:37.152227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:05:59.663 [2024-10-30 09:40:38.260294] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:00.607 ************************************ 00:06:00.607 END TEST raid1_resize_data_offset_test 00:06:00.607 ************************************ 00:06:00.607 09:40:38 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:06:00.607 00:06:00.607 real 0m3.421s 00:06:00.607 user 0m3.416s 00:06:00.607 sys 0m0.403s 00:06:00.607 09:40:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:00.608 09:40:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:00.608 09:40:39 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:00.608 09:40:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:00.608 09:40:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:00.608 09:40:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:00.608 ************************************ 00:06:00.608 START TEST raid0_resize_superblock_test 00:06:00.608 ************************************ 00:06:00.608 09:40:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 0 00:06:00.608 Process raid pid: 59034 00:06:00.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:00.608 09:40:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:00.608 09:40:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59034 00:06:00.608 09:40:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59034' 00:06:00.608 09:40:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59034 00:06:00.608 09:40:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 59034 ']' 00:06:00.608 09:40:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.608 09:40:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:00.608 09:40:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.608 09:40:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:00.608 09:40:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:00.608 09:40:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:00.608 [2024-10-30 09:40:39.097539] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:06:00.608 [2024-10-30 09:40:39.097658] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:00.869 [2024-10-30 09:40:39.256303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.869 [2024-10-30 09:40:39.355613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.131 [2024-10-30 09:40:39.493590] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:01.131 [2024-10-30 09:40:39.493767] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:01.393 09:40:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:01.393 09:40:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:06:01.393 09:40:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:01.393 09:40:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.393 09:40:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.964 malloc0 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.964 [2024-10-30 09:40:40.335913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:01.964 [2024-10-30 09:40:40.335978] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:01.964 [2024-10-30 09:40:40.335999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:01.964 [2024-10-30 09:40:40.336011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:01.964 [2024-10-30 09:40:40.338216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:01.964 [2024-10-30 09:40:40.338249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:01.964 pt0 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.964 96bfdc6a-ae26-4ee2-9c39-22bbfdac161a 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.964 a058b4ef-0736-4391-9c99-b55b918a8657 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.964 09:40:40 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.964 fc3f5195-adcd-46ec-b454-f09f829c984a 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.964 [2024-10-30 09:40:40.431511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev a058b4ef-0736-4391-9c99-b55b918a8657 is claimed 00:06:01.964 [2024-10-30 09:40:40.431600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev fc3f5195-adcd-46ec-b454-f09f829c984a is claimed 00:06:01.964 [2024-10-30 09:40:40.431730] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:01.964 [2024-10-30 09:40:40.431745] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:01.964 [2024-10-30 09:40:40.432010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:01.964 [2024-10-30 09:40:40.432191] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:01.964 [2024-10-30 09:40:40.432201] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:01.964 [2024-10-30 09:40:40.432345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.964 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.965 [2024-10-30 
09:40:40.511775] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.965 [2024-10-30 09:40:40.543718] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:01.965 [2024-10-30 09:40:40.543837] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'a058b4ef-0736-4391-9c99-b55b918a8657' was resized: old size 131072, new size 204800 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.965 [2024-10-30 09:40:40.551630] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:01.965 [2024-10-30 09:40:40.551725] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'fc3f5195-adcd-46ec-b454-f09f829c984a' was resized: old size 131072, new size 204800 00:06:01.965 
[2024-10-30 09:40:40.551827] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.965 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.226 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:02.226 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:02.226 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.226 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.226 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:02.226 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.226 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:02.226 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:02.226 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:02.226 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:02.226 09:40:40 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:02.226 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.226 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.226 [2024-10-30 09:40:40.631798] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:02.226 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.226 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:02.226 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:02.226 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:02.226 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:02.226 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.226 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.226 [2024-10-30 09:40:40.663559] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:02.226 [2024-10-30 09:40:40.663626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:02.226 [2024-10-30 09:40:40.663637] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:02.226 [2024-10-30 09:40:40.663654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:02.226 [2024-10-30 09:40:40.663757] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:02.226 [2024-10-30 09:40:40.663799] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:02.226 
[2024-10-30 09:40:40.663823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:02.226 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.226 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:02.226 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.226 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.226 [2024-10-30 09:40:40.671503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:02.226 [2024-10-30 09:40:40.671554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:02.226 [2024-10-30 09:40:40.671573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:02.226 [2024-10-30 09:40:40.671584] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:02.226 [2024-10-30 09:40:40.673831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:02.226 [2024-10-30 09:40:40.673956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:02.226 pt0 00:06:02.226 [2024-10-30 09:40:40.675597] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev a058b4ef-0736-4391-9c99-b55b918a8657 00:06:02.226 [2024-10-30 09:40:40.675650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev a058b4ef-0736-4391-9c99-b55b918a8657 is claimed 00:06:02.226 [2024-10-30 09:40:40.675748] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev fc3f5195-adcd-46ec-b454-f09f829c984a 00:06:02.226 [2024-10-30 09:40:40.675765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev fc3f5195-adcd-46ec-b454-f09f829c984a is claimed 00:06:02.226 [2024-10-30 
09:40:40.675930] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev fc3f5195-adcd-46ec-b454-f09f829c984a (2) smaller than existing raid bdev Raid (3) 00:06:02.226 [2024-10-30 09:40:40.675952] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev a058b4ef-0736-4391-9c99-b55b918a8657: File exists 00:06:02.226 [2024-10-30 09:40:40.675990] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:02.226 [2024-10-30 09:40:40.676000] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:02.226 [2024-10-30 09:40:40.676254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:02.226 [2024-10-30 09:40:40.676394] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:02.226 [2024-10-30 09:40:40.676402] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:02.227 [2024-10-30 09:40:40.676545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.227 [2024-10-30 09:40:40.691953] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59034 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 59034 ']' 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 59034 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # uname 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59034 00:06:02.227 killing process with pid 59034 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 59034' 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 59034 00:06:02.227 [2024-10-30 09:40:40.748106] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:02.227 [2024-10-30 09:40:40.748172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:02.227 09:40:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 59034 00:06:02.227 [2024-10-30 09:40:40.748216] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:02.227 [2024-10-30 09:40:40.748225] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:03.169 [2024-10-30 09:40:41.646728] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:03.781 ************************************ 00:06:03.781 END TEST raid0_resize_superblock_test 00:06:03.781 ************************************ 00:06:03.781 09:40:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:03.781 00:06:03.781 real 0m3.334s 00:06:03.781 user 0m3.518s 00:06:03.781 sys 0m0.426s 00:06:03.781 09:40:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:03.781 09:40:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.054 09:40:42 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:04.054 09:40:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:04.054 09:40:42 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:04.054 09:40:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:04.054 ************************************ 00:06:04.054 START TEST raid1_resize_superblock_test 00:06:04.054 
************************************ 00:06:04.054 09:40:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 1 00:06:04.054 09:40:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:04.054 Process raid pid: 59116 00:06:04.054 09:40:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59116 00:06:04.054 09:40:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59116' 00:06:04.054 09:40:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59116 00:06:04.054 09:40:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:04.054 09:40:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 59116 ']' 00:06:04.054 09:40:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.054 09:40:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:04.054 09:40:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.054 09:40:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:04.054 09:40:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.054 [2024-10-30 09:40:42.494158] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:06:04.054 [2024-10-30 09:40:42.494448] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:04.054 [2024-10-30 09:40:42.657966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.316 [2024-10-30 09:40:42.762467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.316 [2024-10-30 09:40:42.903803] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:04.316 [2024-10-30 09:40:42.903857] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:04.889 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:04.889 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:06:04.889 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:04.889 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.889 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.151 malloc0 00:06:05.151 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.151 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:05.151 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.151 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.151 [2024-10-30 09:40:43.763500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:05.151 [2024-10-30 09:40:43.763557] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:05.151 [2024-10-30 09:40:43.763578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:05.151 [2024-10-30 09:40:43.763590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:05.151 [2024-10-30 09:40:43.765768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:05.151 [2024-10-30 09:40:43.765805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:05.151 pt0 00:06:05.151 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.151 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:05.151 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.151 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.413 ba4ed4c5-ea4e-4271-ae00-370515e7bdf1 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.414 c5df3193-0205-4590-8a49-2643533ee92e 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.414 09:40:43 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.414 18c941fd-38cc-4fe7-b8f1-1275a792831e 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.414 [2024-10-30 09:40:43.851463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c5df3193-0205-4590-8a49-2643533ee92e is claimed 00:06:05.414 [2024-10-30 09:40:43.851543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 18c941fd-38cc-4fe7-b8f1-1275a792831e is claimed 00:06:05.414 [2024-10-30 09:40:43.851678] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:05.414 [2024-10-30 09:40:43.851692] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:05.414 [2024-10-30 09:40:43.851951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:05.414 [2024-10-30 09:40:43.852139] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:05.414 [2024-10-30 09:40:43.852149] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:05.414 [2024-10-30 09:40:43.852288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- 
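[Annotation] The block counts reported above are internally consistent: `bdev_lvol_create -l lvs0 lvolN 64` makes a 64 MiB lvol, i.e. 131072 blocks of 512 B, while the RAID1 created with `-s` reports blockcnt 122880 — so 8192 blocks (4 MiB) per base bdev are presumably reserved for the superblock region. A quick arithmetic check:

```shell
# Check the creation-time block counts reported in the log (512 B blocks).
lvol_mib=64
blklen=512
lvol_blocks=$((lvol_mib * 1024 * 1024 / blklen))  # 131072 blocks per lvol
raid_blocks=122880                                # blockcnt from the log
sb_blocks=$((lvol_blocks - raid_blocks))          # 8192 -> 4 MiB overhead
echo "$lvol_blocks $sb_blocks"
```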
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:05.414 [2024-10-30 
09:40:43.923708] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.414 [2024-10-30 09:40:43.951639] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:05.414 [2024-10-30 09:40:43.951761] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c5df3193-0205-4590-8a49-2643533ee92e' was resized: old size 131072, new size 204800 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.414 [2024-10-30 09:40:43.959581] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:05.414 [2024-10-30 09:40:43.959600] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '18c941fd-38cc-4fe7-b8f1-1275a792831e' was resized: old size 131072, new size 204800 00:06:05.414 
[2024-10-30 09:40:43.959625] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.414 09:40:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:05.414 09:40:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.414 09:40:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:05.677 09:40:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:05.677 09:40:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:05.677 09:40:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.677 09:40:44 
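[Annotation] The resize numbers also line up: each lvol grows from 131072 to 204800 blocks (64 MiB to 100 MiB), and the raid block count moves from 122880 to 196608 — the new base size minus the same 8192-block overhead seen at creation:

```shell
# Check the resize arithmetic reported in the log (512 B blocks).
blklen=512
new_mib=100
new_blocks=$((new_mib * 1024 * 1024 / blklen))  # 204800, matches the log
overhead=$((131072 - 122880))                   # 8192-block superblock region
echo $((new_blocks - overhead))                 # 196608, the new raid size
```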
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.677 09:40:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:05.677 09:40:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:05.677 [2024-10-30 09:40:44.035739] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:05.677 09:40:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.677 09:40:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:05.677 09:40:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:05.677 09:40:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:05.677 09:40:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:05.677 09:40:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.677 09:40:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.677 [2024-10-30 09:40:44.067508] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:05.677 [2024-10-30 09:40:44.067581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:05.677 [2024-10-30 09:40:44.067607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:05.677 [2024-10-30 09:40:44.067752] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:05.677 [2024-10-30 09:40:44.067953] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:05.677 [2024-10-30 09:40:44.068020] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:05.677 
[2024-10-30 09:40:44.068037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:05.677 09:40:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.677 09:40:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:05.677 09:40:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.677 09:40:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.677 [2024-10-30 09:40:44.075449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:05.677 [2024-10-30 09:40:44.075496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:05.677 [2024-10-30 09:40:44.075512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:05.677 [2024-10-30 09:40:44.075524] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:05.677 [2024-10-30 09:40:44.077690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:05.677 [2024-10-30 09:40:44.077726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:05.677 [2024-10-30 09:40:44.079308] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c5df3193-0205-4590-8a49-2643533ee92e 00:06:05.677 [2024-10-30 09:40:44.079368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c5df3193-0205-4590-8a49-2643533ee92e is claimed 00:06:05.677 [2024-10-30 09:40:44.079467] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 18c941fd-38cc-4fe7-b8f1-1275a792831e 00:06:05.677 [2024-10-30 09:40:44.079484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 18c941fd-38cc-4fe7-b8f1-1275a792831e is claimed 00:06:05.677 [2024-10-30 09:40:44.079593] 
bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 18c941fd-38cc-4fe7-b8f1-1275a792831e (2) smaller than existing raid bdev Raid (3) 00:06:05.677 [2024-10-30 09:40:44.079611] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev c5df3193-0205-4590-8a49-2643533ee92e: File exists 00:06:05.677 [2024-10-30 09:40:44.079652] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:05.677 [2024-10-30 09:40:44.079662] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:05.677 [2024-10-30 09:40:44.079911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:05.677 [2024-10-30 09:40:44.080052] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:05.677 [2024-10-30 09:40:44.080074] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:05.677 [2024-10-30 09:40:44.080243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:05.677 pt0 00:06:05.677 09:40:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.677 09:40:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:05.677 09:40:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.678 09:40:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.678 09:40:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.678 09:40:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:05.678 09:40:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:05.678 09:40:44 bdev_raid.raid1_resize_superblock_test -- 
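[Annotation] The superblock portion of the test tears down `pt0` and recreates it, after which examine finds the on-disk superblocks and reassembles the raid automatically. A dry-run sketch of that RPC sequence — the RPC names appear verbatim in the log, but the wrapper function and its `echo` default are illustrative only; pass a real runner (e.g. `rpc.py`) to execute against a live target:

```shell
#!/usr/bin/env bash
# Print (dry-run) the RPC sequence the superblock check drives.
replay_superblock_check() {
  local run=${1:-echo}
  $run bdev_passthru_delete pt0                # drop the passthru; raid goes offline
  $run bdev_passthru_create -b malloc0 -p pt0  # re-expose the same backing bdev
  $run bdev_wait_for_examine                   # raid reassembles from superblocks
  $run bdev_get_bdevs -b Raid                  # confirm num_blocks survived
}
replay_superblock_check
```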
bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:05.678 09:40:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:05.678 09:40:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.678 09:40:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.678 [2024-10-30 09:40:44.095929] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:05.678 09:40:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.678 09:40:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:05.678 09:40:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:05.678 09:40:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:05.678 09:40:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59116 00:06:05.678 09:40:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 59116 ']' 00:06:05.678 09:40:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 59116 00:06:05.678 09:40:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # uname 00:06:05.678 09:40:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:05.678 09:40:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59116 00:06:05.678 killing process with pid 59116 00:06:05.678 09:40:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:05.678 09:40:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:05.678 09:40:44 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 59116' 00:06:05.678 09:40:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 59116 00:06:05.678 [2024-10-30 09:40:44.146067] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:05.678 09:40:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 59116 00:06:05.678 [2024-10-30 09:40:44.146126] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:05.678 [2024-10-30 09:40:44.146173] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:05.678 [2024-10-30 09:40:44.146182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:06.617 [2024-10-30 09:40:45.026962] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:07.252 ************************************ 00:06:07.252 END TEST raid1_resize_superblock_test 00:06:07.252 ************************************ 00:06:07.252 09:40:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:07.252 00:06:07.252 real 0m3.300s 00:06:07.252 user 0m3.503s 00:06:07.252 sys 0m0.430s 00:06:07.252 09:40:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:07.252 09:40:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:07.252 09:40:45 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:07.252 09:40:45 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:07.252 09:40:45 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:07.252 09:40:45 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:07.252 09:40:45 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:07.252 09:40:45 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:07.252 
09:40:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:07.252 09:40:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:07.252 09:40:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:07.252 ************************************ 00:06:07.252 START TEST raid_function_test_raid0 00:06:07.252 ************************************ 00:06:07.252 Process raid pid: 59208 00:06:07.252 09:40:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1127 -- # raid_function_test raid0 00:06:07.252 09:40:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:07.252 09:40:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:07.252 09:40:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:07.252 09:40:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=59208 00:06:07.252 09:40:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 59208' 00:06:07.252 09:40:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 59208 00:06:07.252 09:40:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # '[' -z 59208 ']' 00:06:07.252 09:40:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.252 09:40:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:07.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.252 09:40:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:07.252 09:40:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:07.252 09:40:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:07.252 09:40:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:07.515 [2024-10-30 09:40:45.881854] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:06:07.515 [2024-10-30 09:40:45.881968] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:07.515 [2024-10-30 09:40:46.039841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.809 [2024-10-30 09:40:46.141376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.809 [2024-10-30 09:40:46.281129] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:07.810 [2024-10-30 09:40:46.281175] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:08.380 09:40:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:08.380 09:40:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # return 0 00:06:08.380 09:40:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:08.380 09:40:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.380 09:40:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:08.380 Base_1 00:06:08.380 09:40:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.380 09:40:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd 
bdev_malloc_create 32 512 -b Base_2 00:06:08.381 09:40:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.381 09:40:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:08.381 Base_2 00:06:08.381 09:40:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.381 09:40:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:08.381 09:40:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.381 09:40:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:08.381 [2024-10-30 09:40:46.784994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:08.381 [2024-10-30 09:40:46.786842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:08.381 [2024-10-30 09:40:46.787031] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:08.381 [2024-10-30 09:40:46.787051] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:08.381 [2024-10-30 09:40:46.787333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:08.381 [2024-10-30 09:40:46.787462] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:08.381 [2024-10-30 09:40:46.787471] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:08.381 [2024-10-30 09:40:46.787607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:08.381 09:40:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.381 09:40:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 
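[Annotation] For the raid0 function test, two 32 MiB malloc base bdevs striped together give 64 MiB, i.e. the 131072 × 512 B blockcnt reported above (no superblock overhead here — this raid is created without `-s`):

```shell
# Check the raid0 blockcnt: two 32 MiB bases, 512 B blocks, no superblock.
base_mib=32
blklen=512
num_bases=2
raid0_blocks=$((num_bases * base_mib * 1024 * 1024 / blklen))
echo "$raid0_blocks"  # 131072, matches the reported blockcnt
```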
00:06:08.381 09:40:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.381 09:40:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:08.381 09:40:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:08.381 09:40:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.381 09:40:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:08.381 09:40:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:08.381 09:40:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:08.381 09:40:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:08.381 09:40:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:08.381 09:40:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.381 09:40:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:08.381 09:40:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.381 09:40:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:08.381 09:40:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.381 09:40:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:08.381 09:40:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:08.641 [2024-10-30 09:40:47.005104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:08.641 /dev/nbd0 00:06:08.641 09:40:47 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:08.641 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:08.641 09:40:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:08.641 09:40:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # local i 00:06:08.641 09:40:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:08.641 09:40:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:08.642 09:40:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:08.642 09:40:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # break 00:06:08.642 09:40:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:08.642 09:40:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:08.642 09:40:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:08.642 1+0 records in 00:06:08.642 1+0 records out 00:06:08.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271469 s, 15.1 MB/s 00:06:08.642 09:40:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.642 09:40:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # size=4096 00:06:08.642 09:40:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.642 09:40:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:08.642 09:40:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 
-- # return 0 00:06:08.642 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.642 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:08.642 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:08.642 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:08.642 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:08.901 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:08.901 { 00:06:08.901 "nbd_device": "/dev/nbd0", 00:06:08.901 "bdev_name": "raid" 00:06:08.901 } 00:06:08.901 ]' 00:06:08.901 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.901 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:08.901 { 00:06:08.901 "nbd_device": "/dev/nbd0", 00:06:08.902 "bdev_name": "raid" 00:06:08.902 } 00:06:08.902 ]' 00:06:08.902 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:08.902 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.902 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:08.902 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:08.902 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:08.902 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:08.902 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:08.902 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 
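[Annotation] `raid_unmap_data_verify` below works in 512 B blocks: it writes `rw_blk_num=4096` blocks (2097152 bytes) of random data, then walks three unmap ranges given as block offsets `(0 1028 321)` and block counts `(128 2035 456)`, converting each to byte offsets and lengths for `dd`/`blkdiscard`. The byte math reproduced:

```shell
# Reproduce the unmap offset/length byte math from raid_unmap_data_verify.
blksize=512
rw_blk_num=4096
unmap_blk_offs=(0 1028 321)
unmap_blk_nums=(128 2035 456)
rw_len=$((rw_blk_num * blksize))
echo "rw_len=$rw_len"                   # 2097152 bytes of test data
for i in 0 1 2; do
  off=$((unmap_blk_offs[i] * blksize))  # byte offset for blkdiscard -o
  len=$((unmap_blk_nums[i] * blksize))  # byte length for blkdiscard -l
  echo "unmap_off=$off unmap_len=$len"
done
```

The three iterations yield (0, 65536), (526336, 1041920) and (164352, 233472), matching the `unmap_off`/`unmap_len` values the log records.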
00:06:08.902 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:08.902 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:08.902 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:08.902 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:08.902 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:08.902 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:08.902 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:08.902 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:08.902 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:08.902 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:08.902 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:08.902 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:08.902 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:08.902 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:08.902 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:08.902 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:08.902 4096+0 records in 00:06:08.902 4096+0 records out 00:06:08.902 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0211161 s, 99.3 MB/s 00:06:08.902 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd 
if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:09.163 4096+0 records in 00:06:09.163 4096+0 records out 00:06:09.163 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.212791 s, 9.9 MB/s 00:06:09.163 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:09.163 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:09.163 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:09.163 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:09.163 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:09.163 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:09.163 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:09.164 128+0 records in 00:06:09.164 128+0 records out 00:06:09.164 65536 bytes (66 kB, 64 KiB) copied, 0.000763698 s, 85.8 MB/s 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:09.164 2035+0 records in 00:06:09.164 2035+0 records out 00:06:09.164 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00685971 s, 152 MB/s 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:09.164 456+0 records in 00:06:09.164 456+0 records out 00:06:09.164 233472 bytes (233 kB, 228 KiB) copied, 0.00142035 s, 164 MB/s 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # 
return 0 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.164 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:09.429 [2024-10-30 09:40:47.858007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:09.429 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.429 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.429 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.429 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.429 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.429 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.429 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:09.429 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.429 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:09.429 09:40:47 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:09.429 09:40:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:09.725 09:40:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.725 09:40:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.725 09:40:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.725 09:40:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:09.725 09:40:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:09.725 09:40:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.725 09:40:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:09.725 09:40:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:09.725 09:40:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:09.725 09:40:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:09.725 09:40:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:09.725 09:40:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 59208 00:06:09.725 09:40:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # '[' -z 59208 ']' 00:06:09.725 09:40:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # kill -0 59208 00:06:09.725 09:40:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # uname 00:06:09.725 09:40:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:09.725 09:40:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59208 
00:06:09.725 09:40:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:09.725 09:40:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:09.725 killing process with pid 59208 00:06:09.725 09:40:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59208' 00:06:09.725 09:40:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@971 -- # kill 59208 00:06:09.725 [2024-10-30 09:40:48.150576] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:09.725 09:40:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@976 -- # wait 59208 00:06:09.725 [2024-10-30 09:40:48.150672] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:09.725 [2024-10-30 09:40:48.150722] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:09.725 [2024-10-30 09:40:48.150736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:09.725 [2024-10-30 09:40:48.279151] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:10.670 09:40:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:10.670 00:06:10.670 real 0m3.171s 00:06:10.670 user 0m3.843s 00:06:10.670 sys 0m0.717s 00:06:10.670 09:40:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:10.670 09:40:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:10.670 ************************************ 00:06:10.670 END TEST raid_function_test_raid0 00:06:10.670 ************************************ 00:06:10.671 09:40:49 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:10.671 09:40:49 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 
-le 1 ']' 00:06:10.671 09:40:49 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:10.671 09:40:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:10.671 ************************************ 00:06:10.671 START TEST raid_function_test_concat 00:06:10.671 ************************************ 00:06:10.671 09:40:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1127 -- # raid_function_test concat 00:06:10.671 09:40:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:10.671 09:40:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:10.671 09:40:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:10.671 09:40:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=59331 00:06:10.671 Process raid pid: 59331 00:06:10.671 09:40:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 59331' 00:06:10.671 09:40:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 59331 00:06:10.671 09:40:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # '[' -z 59331 ']' 00:06:10.671 09:40:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.671 09:40:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:10.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.671 09:40:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:10.671 09:40:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:10.671 09:40:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:10.671 09:40:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:10.671 [2024-10-30 09:40:49.109028] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:06:10.671 [2024-10-30 09:40:49.109156] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:10.671 [2024-10-30 09:40:49.269171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.931 [2024-10-30 09:40:49.372925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.931 [2024-10-30 09:40:49.511552] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:10.931 [2024-10-30 09:40:49.511592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:11.501 09:40:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:11.501 09:40:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # return 0 00:06:11.501 09:40:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:11.501 09:40:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.501 09:40:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:11.501 Base_1 00:06:11.501 09:40:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.501 09:40:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- 
# rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:11.501 09:40:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.501 09:40:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:11.501 Base_2 00:06:11.501 09:40:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.501 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:11.501 09:40:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.501 09:40:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:11.501 [2024-10-30 09:40:50.015002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:11.501 [2024-10-30 09:40:50.016866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:11.501 [2024-10-30 09:40:50.016938] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:11.501 [2024-10-30 09:40:50.016951] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:11.501 [2024-10-30 09:40:50.017234] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:11.501 [2024-10-30 09:40:50.017366] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:11.501 [2024-10-30 09:40:50.017381] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:11.501 [2024-10-30 09:40:50.017519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:11.501 09:40:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.501 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd 
bdev_raid_get_bdevs online 00:06:11.501 09:40:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.501 09:40:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:11.501 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:11.501 09:40:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.501 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:11.501 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:11.501 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:11.501 09:40:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:11.501 09:40:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:11.501 09:40:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:11.501 09:40:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:11.501 09:40:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:11.501 09:40:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:11.501 09:40:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:11.501 09:40:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:11.502 09:40:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:11.762 [2024-10-30 09:40:50.251127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:11.762 /dev/nbd0 
00:06:11.762 09:40:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:11.762 09:40:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:11.762 09:40:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:11.762 09:40:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # local i 00:06:11.762 09:40:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:11.762 09:40:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:11.762 09:40:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:11.762 09:40:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # break 00:06:11.762 09:40:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:11.762 09:40:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:11.762 09:40:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:11.762 1+0 records in 00:06:11.762 1+0 records out 00:06:11.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578443 s, 7.1 MB/s 00:06:11.762 09:40:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.762 09:40:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # size=4096 00:06:11.762 09:40:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.762 09:40:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:11.762 09:40:50 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # return 0 00:06:11.762 09:40:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.762 09:40:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:11.762 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:11.762 09:40:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:11.762 09:40:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:12.023 { 00:06:12.023 "nbd_device": "/dev/nbd0", 00:06:12.023 "bdev_name": "raid" 00:06:12.023 } 00:06:12.023 ]' 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.023 { 00:06:12.023 "nbd_device": "/dev/nbd0", 00:06:12.023 "bdev_name": "raid" 00:06:12.023 } 00:06:12.023 ]' 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:12.023 09:40:50 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:12.023 4096+0 records in 00:06:12.023 4096+0 records out 00:06:12.023 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0212763 s, 98.6 MB/s 
00:06:12.023 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:12.609 4096+0 records in 00:06:12.609 4096+0 records out 00:06:12.609 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.403462 s, 5.2 MB/s 00:06:12.609 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:12.609 09:40:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:12.609 128+0 records in 00:06:12.609 128+0 records out 00:06:12.609 65536 bytes (66 kB, 64 KiB) copied, 0.00059684 s, 110 MB/s 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:12.609 2035+0 records in 00:06:12.609 2035+0 records out 00:06:12.609 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00631132 s, 165 MB/s 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:12.609 456+0 records in 00:06:12.609 456+0 records out 00:06:12.609 233472 bytes (233 kB, 228 KiB) copied, 0.00167926 s, 139 MB/s 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.609 09:40:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:12.946 [2024-10-30 09:40:51.290372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat 
-- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 59331 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # '[' -z 59331 ']' 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # kill -0 59331 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # uname 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # '[' Linux = 
Linux ']' 00:06:12.946 09:40:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59331 00:06:13.208 09:40:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:13.208 09:40:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:13.208 killing process with pid 59331 00:06:13.208 09:40:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59331' 00:06:13.208 09:40:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@971 -- # kill 59331 00:06:13.208 [2024-10-30 09:40:51.568487] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:13.208 09:40:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@976 -- # wait 59331 00:06:13.208 [2024-10-30 09:40:51.568581] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:13.208 [2024-10-30 09:40:51.568636] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:13.208 [2024-10-30 09:40:51.568647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:13.208 [2024-10-30 09:40:51.695094] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:13.782 09:40:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:13.782 00:06:13.782 real 0m3.352s 00:06:13.782 user 0m3.957s 00:06:13.782 sys 0m0.782s 00:06:13.782 09:40:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:13.782 09:40:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:13.782 ************************************ 00:06:13.782 END TEST raid_function_test_concat 00:06:13.782 ************************************ 00:06:14.068 09:40:52 bdev_raid -- 
bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:14.068 09:40:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:14.068 09:40:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:14.068 09:40:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:14.068 ************************************ 00:06:14.068 START TEST raid0_resize_test 00:06:14.068 ************************************ 00:06:14.068 09:40:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 0 00:06:14.068 09:40:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:14.068 09:40:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:14.068 09:40:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:14.068 09:40:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:14.068 09:40:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:14.068 09:40:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:14.068 09:40:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:14.068 09:40:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:14.068 Process raid pid: 59448 00:06:14.068 09:40:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=59448 00:06:14.068 09:40:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 59448' 00:06:14.068 09:40:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 59448 00:06:14.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:14.068 09:40:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # '[' -z 59448 ']' 00:06:14.068 09:40:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.068 09:40:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:14.068 09:40:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:14.068 09:40:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.068 09:40:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:14.068 09:40:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:14.068 [2024-10-30 09:40:52.526306] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:06:14.068 [2024-10-30 09:40:52.526421] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:14.068 [2024-10-30 09:40:52.681243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.329 [2024-10-30 09:40:52.784182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.329 [2024-10-30 09:40:52.923121] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:14.329 [2024-10-30 09:40:52.923160] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:14.900 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:14.900 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@866 -- # return 0 00:06:14.900 09:40:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- 
# rpc_cmd bdev_null_create Base_1 32 512 00:06:14.900 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.900 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:14.900 Base_1 00:06:14.900 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.900 09:40:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:14.900 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.900 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:14.900 Base_2 00:06:14.900 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.900 09:40:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:14.900 09:40:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:14.900 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.900 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:14.900 [2024-10-30 09:40:53.394967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:14.900 [2024-10-30 09:40:53.396804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:14.900 [2024-10-30 09:40:53.396862] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:14.900 [2024-10-30 09:40:53.396874] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:14.900 [2024-10-30 09:40:53.397134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:14.900 [2024-10-30 09:40:53.397242] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 
00:06:14.900 [2024-10-30 09:40:53.397251] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:14.900 [2024-10-30 09:40:53.397390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:14.900 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.900 09:40:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:14.900 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.900 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:14.900 [2024-10-30 09:40:53.402949] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:14.900 [2024-10-30 09:40:53.402979] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:14.900 true 00:06:14.900 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.900 09:40:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:14.900 09:40:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:14.901 [2024-10-30 09:40:53.415143] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 
00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:14.901 [2024-10-30 09:40:53.446944] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:14.901 [2024-10-30 09:40:53.446971] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:14.901 [2024-10-30 09:40:53.446995] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:14.901 true 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:14.901 [2024-10-30 09:40:53.459152] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 
00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 59448 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # '[' -z 59448 ']' 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # kill -0 59448 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # uname 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59448 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:14.901 killing process with pid 59448 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59448' 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@971 -- # kill 59448 00:06:14.901 [2024-10-30 09:40:53.513829] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:14.901 09:40:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@976 -- # wait 59448 00:06:14.901 [2024-10-30 09:40:53.513903] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:14.901 [2024-10-30 09:40:53.513949] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:14.901 [2024-10-30 09:40:53.513959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:15.161 [2024-10-30 09:40:53.525138] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:15.733 ************************************ 00:06:15.733 END TEST raid0_resize_test 00:06:15.733 ************************************ 00:06:15.733 09:40:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:15.733 00:06:15.733 real 0m1.764s 00:06:15.733 user 0m1.961s 00:06:15.733 sys 0m0.207s 00:06:15.733 09:40:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:15.733 09:40:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:15.733 09:40:54 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:15.733 09:40:54 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:15.733 09:40:54 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:15.733 09:40:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:15.733 ************************************ 00:06:15.733 START TEST raid1_resize_test 00:06:15.733 ************************************ 00:06:15.733 09:40:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 1 00:06:15.733 09:40:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:15.733 09:40:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:15.733 09:40:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:15.733 09:40:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:15.733 09:40:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:15.733 09:40:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:15.733 09:40:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:15.733 09:40:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:15.733 
Process raid pid: 59506 00:06:15.733 09:40:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=59506 00:06:15.733 09:40:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 59506' 00:06:15.733 09:40:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 59506 00:06:15.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.733 09:40:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@833 -- # '[' -z 59506 ']' 00:06:15.733 09:40:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.733 09:40:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:15.733 09:40:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.733 09:40:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:15.733 09:40:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:15.733 09:40:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:15.994 [2024-10-30 09:40:54.362426] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:06:15.994 [2024-10-30 09:40:54.362552] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:15.994 [2024-10-30 09:40:54.519232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.254 [2024-10-30 09:40:54.622470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.254 [2024-10-30 09:40:54.762810] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:16.254 [2024-10-30 09:40:54.762850] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:16.826 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:16.826 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@866 -- # return 0 00:06:16.826 09:40:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:16.826 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.826 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.826 Base_1 00:06:16.826 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.826 09:40:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:16.826 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.826 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.826 Base_2 00:06:16.826 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.826 09:40:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:06:16.826 09:40:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.827 [2024-10-30 09:40:55.239343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:16.827 [2024-10-30 09:40:55.241265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:16.827 [2024-10-30 09:40:55.241326] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:16.827 [2024-10-30 09:40:55.241337] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:16.827 [2024-10-30 09:40:55.241583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:16.827 [2024-10-30 09:40:55.241691] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:16.827 [2024-10-30 09:40:55.241700] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:16.827 [2024-10-30 09:40:55.241830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.827 [2024-10-30 09:40:55.247324] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:16.827 [2024-10-30 09:40:55.247352] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:16.827 true 00:06:16.827 
09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.827 [2024-10-30 09:40:55.259507] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.827 [2024-10-30 09:40:55.291324] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:16.827 [2024-10-30 09:40:55.291350] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:16.827 [2024-10-30 09:40:55.291377] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:16.827 true 00:06:16.827 09:40:55 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.827 [2024-10-30 09:40:55.303523] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 59506 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@952 -- # '[' -z 59506 ']' 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # kill -0 59506 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # uname 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59506 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59506' 00:06:16.827 killing process with pid 59506 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@971 -- # kill 59506 00:06:16.827 09:40:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@976 -- # wait 59506 00:06:16.827 [2024-10-30 09:40:55.350296] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:16.827 [2024-10-30 09:40:55.350375] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:16.827 [2024-10-30 09:40:55.350823] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:16.827 [2024-10-30 09:40:55.350847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:16.827 [2024-10-30 09:40:55.361868] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:17.768 09:40:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:17.768 00:06:17.768 real 0m1.779s 00:06:17.768 user 0m1.920s 00:06:17.768 sys 0m0.258s 00:06:17.768 09:40:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:17.768 09:40:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.768 ************************************ 00:06:17.768 END TEST raid1_resize_test 00:06:17.768 ************************************ 00:06:17.768 09:40:56 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:17.768 09:40:56 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:17.769 09:40:56 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:17.769 09:40:56 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:17.769 09:40:56 bdev_raid 
-- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:17.769 09:40:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:17.769 ************************************ 00:06:17.769 START TEST raid_state_function_test 00:06:17.769 ************************************ 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 false 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:17.769 Process raid pid: 59558 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=59558 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 59558' 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 59558 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 59558 ']' 00:06:17.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:17.769 09:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.769 [2024-10-30 09:40:56.202340] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:06:17.769 [2024-10-30 09:40:56.202457] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:17.769 [2024-10-30 09:40:56.359867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.031 [2024-10-30 09:40:56.462135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.031 [2024-10-30 09:40:56.607134] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:18.031 [2024-10-30 09:40:56.607177] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:18.603 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:18.603 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:06:18.603 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:18.603 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:06:18.603 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.603 [2024-10-30 09:40:57.061722] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:18.603 [2024-10-30 09:40:57.061777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:18.603 [2024-10-30 09:40:57.061787] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:18.603 [2024-10-30 09:40:57.061797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:18.603 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.603 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:18.603 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:18.603 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:18.603 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:18.603 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:18.603 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:18.603 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:18.603 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:18.603 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:18.603 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:18.603 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:06:18.603 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:18.603 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.603 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.603 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.603 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:18.603 "name": "Existed_Raid", 00:06:18.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:18.603 "strip_size_kb": 64, 00:06:18.603 "state": "configuring", 00:06:18.603 "raid_level": "raid0", 00:06:18.603 "superblock": false, 00:06:18.603 "num_base_bdevs": 2, 00:06:18.603 "num_base_bdevs_discovered": 0, 00:06:18.603 "num_base_bdevs_operational": 2, 00:06:18.603 "base_bdevs_list": [ 00:06:18.603 { 00:06:18.603 "name": "BaseBdev1", 00:06:18.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:18.603 "is_configured": false, 00:06:18.603 "data_offset": 0, 00:06:18.603 "data_size": 0 00:06:18.603 }, 00:06:18.603 { 00:06:18.603 "name": "BaseBdev2", 00:06:18.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:18.603 "is_configured": false, 00:06:18.603 "data_offset": 0, 00:06:18.603 "data_size": 0 00:06:18.603 } 00:06:18.603 ] 00:06:18.603 }' 00:06:18.603 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:18.603 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.867 
[2024-10-30 09:40:57.377744] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:18.867 [2024-10-30 09:40:57.377773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.867 [2024-10-30 09:40:57.385745] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:18.867 [2024-10-30 09:40:57.385785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:18.867 [2024-10-30 09:40:57.385794] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:18.867 [2024-10-30 09:40:57.385807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.867 [2024-10-30 09:40:57.418606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:18.867 BaseBdev1 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.867 [ 00:06:18.867 { 00:06:18.867 "name": "BaseBdev1", 00:06:18.867 "aliases": [ 00:06:18.867 "0f54c6e6-a1b6-4822-89b6-776055be4051" 00:06:18.867 ], 00:06:18.867 "product_name": "Malloc disk", 00:06:18.867 "block_size": 512, 00:06:18.867 "num_blocks": 65536, 00:06:18.867 "uuid": "0f54c6e6-a1b6-4822-89b6-776055be4051", 00:06:18.867 "assigned_rate_limits": { 00:06:18.867 "rw_ios_per_sec": 0, 00:06:18.867 "rw_mbytes_per_sec": 0, 00:06:18.867 "r_mbytes_per_sec": 0, 00:06:18.867 "w_mbytes_per_sec": 0 00:06:18.867 }, 
00:06:18.867 "claimed": true, 00:06:18.867 "claim_type": "exclusive_write", 00:06:18.867 "zoned": false, 00:06:18.867 "supported_io_types": { 00:06:18.867 "read": true, 00:06:18.867 "write": true, 00:06:18.867 "unmap": true, 00:06:18.867 "flush": true, 00:06:18.867 "reset": true, 00:06:18.867 "nvme_admin": false, 00:06:18.867 "nvme_io": false, 00:06:18.867 "nvme_io_md": false, 00:06:18.867 "write_zeroes": true, 00:06:18.867 "zcopy": true, 00:06:18.867 "get_zone_info": false, 00:06:18.867 "zone_management": false, 00:06:18.867 "zone_append": false, 00:06:18.867 "compare": false, 00:06:18.867 "compare_and_write": false, 00:06:18.867 "abort": true, 00:06:18.867 "seek_hole": false, 00:06:18.867 "seek_data": false, 00:06:18.867 "copy": true, 00:06:18.867 "nvme_iov_md": false 00:06:18.867 }, 00:06:18.867 "memory_domains": [ 00:06:18.867 { 00:06:18.867 "dma_device_id": "system", 00:06:18.867 "dma_device_type": 1 00:06:18.867 }, 00:06:18.867 { 00:06:18.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:18.867 "dma_device_type": 2 00:06:18.867 } 00:06:18.867 ], 00:06:18.867 "driver_specific": {} 00:06:18.867 } 00:06:18.867 ] 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:18.867 09:40:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:18.867 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:18.868 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:18.868 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:18.868 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.868 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.868 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.145 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:19.145 "name": "Existed_Raid", 00:06:19.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:19.145 "strip_size_kb": 64, 00:06:19.145 "state": "configuring", 00:06:19.145 "raid_level": "raid0", 00:06:19.145 "superblock": false, 00:06:19.145 "num_base_bdevs": 2, 00:06:19.145 "num_base_bdevs_discovered": 1, 00:06:19.145 "num_base_bdevs_operational": 2, 00:06:19.145 "base_bdevs_list": [ 00:06:19.145 { 00:06:19.145 "name": "BaseBdev1", 00:06:19.145 "uuid": "0f54c6e6-a1b6-4822-89b6-776055be4051", 00:06:19.145 "is_configured": true, 00:06:19.145 "data_offset": 0, 00:06:19.145 "data_size": 65536 00:06:19.145 }, 00:06:19.145 { 00:06:19.145 "name": "BaseBdev2", 00:06:19.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:19.145 "is_configured": false, 00:06:19.145 
"data_offset": 0, 00:06:19.145 "data_size": 0 00:06:19.145 } 00:06:19.145 ] 00:06:19.145 }' 00:06:19.145 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:19.145 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.145 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:19.145 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.145 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.145 [2024-10-30 09:40:57.758730] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:19.145 [2024-10-30 09:40:57.758773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:19.407 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.407 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:19.407 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.407 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.407 [2024-10-30 09:40:57.766777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:19.407 [2024-10-30 09:40:57.768751] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:19.407 [2024-10-30 09:40:57.768864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:19.407 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.407 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 
00:06:19.407 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:19.407 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:19.407 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:19.407 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:19.407 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:19.407 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:19.407 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:19.407 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:19.407 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:19.407 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:19.407 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:19.407 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:19.407 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:19.407 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.407 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.407 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.407 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:19.407 "name": "Existed_Raid", 00:06:19.407 
"uuid": "00000000-0000-0000-0000-000000000000", 00:06:19.407 "strip_size_kb": 64, 00:06:19.407 "state": "configuring", 00:06:19.407 "raid_level": "raid0", 00:06:19.407 "superblock": false, 00:06:19.407 "num_base_bdevs": 2, 00:06:19.407 "num_base_bdevs_discovered": 1, 00:06:19.407 "num_base_bdevs_operational": 2, 00:06:19.407 "base_bdevs_list": [ 00:06:19.407 { 00:06:19.407 "name": "BaseBdev1", 00:06:19.407 "uuid": "0f54c6e6-a1b6-4822-89b6-776055be4051", 00:06:19.407 "is_configured": true, 00:06:19.407 "data_offset": 0, 00:06:19.407 "data_size": 65536 00:06:19.407 }, 00:06:19.407 { 00:06:19.407 "name": "BaseBdev2", 00:06:19.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:19.407 "is_configured": false, 00:06:19.407 "data_offset": 0, 00:06:19.407 "data_size": 0 00:06:19.407 } 00:06:19.407 ] 00:06:19.407 }' 00:06:19.407 09:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:19.407 09:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.667 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:19.667 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.667 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.667 [2024-10-30 09:40:58.125524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:19.667 [2024-10-30 09:40:58.125570] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:19.667 [2024-10-30 09:40:58.125579] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:19.667 [2024-10-30 09:40:58.125835] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:19.667 [2024-10-30 09:40:58.125970] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007e80 00:06:19.667 [2024-10-30 09:40:58.125982] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:19.667 [2024-10-30 09:40:58.126235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:19.667 BaseBdev2 00:06:19.667 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.667 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:19.667 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:06:19.667 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:06:19.667 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:06:19.667 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:06:19.667 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:06:19.667 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:06:19.667 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.667 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.667 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.667 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:19.667 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.667 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.667 [ 00:06:19.667 { 00:06:19.667 "name": "BaseBdev2", 00:06:19.667 "aliases": [ 00:06:19.667 
"e7cc6c79-6295-40dc-8455-fcc02c6814e6" 00:06:19.667 ], 00:06:19.667 "product_name": "Malloc disk", 00:06:19.667 "block_size": 512, 00:06:19.667 "num_blocks": 65536, 00:06:19.667 "uuid": "e7cc6c79-6295-40dc-8455-fcc02c6814e6", 00:06:19.667 "assigned_rate_limits": { 00:06:19.667 "rw_ios_per_sec": 0, 00:06:19.667 "rw_mbytes_per_sec": 0, 00:06:19.667 "r_mbytes_per_sec": 0, 00:06:19.667 "w_mbytes_per_sec": 0 00:06:19.667 }, 00:06:19.667 "claimed": true, 00:06:19.667 "claim_type": "exclusive_write", 00:06:19.667 "zoned": false, 00:06:19.667 "supported_io_types": { 00:06:19.667 "read": true, 00:06:19.667 "write": true, 00:06:19.667 "unmap": true, 00:06:19.667 "flush": true, 00:06:19.667 "reset": true, 00:06:19.667 "nvme_admin": false, 00:06:19.667 "nvme_io": false, 00:06:19.667 "nvme_io_md": false, 00:06:19.667 "write_zeroes": true, 00:06:19.667 "zcopy": true, 00:06:19.667 "get_zone_info": false, 00:06:19.667 "zone_management": false, 00:06:19.667 "zone_append": false, 00:06:19.667 "compare": false, 00:06:19.667 "compare_and_write": false, 00:06:19.667 "abort": true, 00:06:19.667 "seek_hole": false, 00:06:19.667 "seek_data": false, 00:06:19.667 "copy": true, 00:06:19.667 "nvme_iov_md": false 00:06:19.667 }, 00:06:19.667 "memory_domains": [ 00:06:19.667 { 00:06:19.667 "dma_device_id": "system", 00:06:19.667 "dma_device_type": 1 00:06:19.667 }, 00:06:19.667 { 00:06:19.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:19.667 "dma_device_type": 2 00:06:19.667 } 00:06:19.667 ], 00:06:19.667 "driver_specific": {} 00:06:19.667 } 00:06:19.667 ] 00:06:19.668 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.668 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:06:19.668 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:19.668 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:19.668 
09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:19.668 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:19.668 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:19.668 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:19.668 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:19.668 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:19.668 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:19.668 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:19.668 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:19.668 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:19.668 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:19.668 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:19.668 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.668 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.668 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.668 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:19.668 "name": "Existed_Raid", 00:06:19.668 "uuid": "4cb6894d-673e-4c23-acee-484df654e2cb", 00:06:19.668 "strip_size_kb": 64, 00:06:19.668 "state": "online", 00:06:19.668 
"raid_level": "raid0", 00:06:19.668 "superblock": false, 00:06:19.668 "num_base_bdevs": 2, 00:06:19.668 "num_base_bdevs_discovered": 2, 00:06:19.668 "num_base_bdevs_operational": 2, 00:06:19.668 "base_bdevs_list": [ 00:06:19.668 { 00:06:19.668 "name": "BaseBdev1", 00:06:19.668 "uuid": "0f54c6e6-a1b6-4822-89b6-776055be4051", 00:06:19.668 "is_configured": true, 00:06:19.668 "data_offset": 0, 00:06:19.668 "data_size": 65536 00:06:19.668 }, 00:06:19.668 { 00:06:19.668 "name": "BaseBdev2", 00:06:19.668 "uuid": "e7cc6c79-6295-40dc-8455-fcc02c6814e6", 00:06:19.668 "is_configured": true, 00:06:19.668 "data_offset": 0, 00:06:19.668 "data_size": 65536 00:06:19.668 } 00:06:19.668 ] 00:06:19.668 }' 00:06:19.668 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:19.668 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.928 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:19.928 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:19.928 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:19.928 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:19.928 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:19.928 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:19.928 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:19.928 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.928 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.928 09:40:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:19.928 [2024-10-30 09:40:58.477962] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:19.928 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.928 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:19.928 "name": "Existed_Raid", 00:06:19.928 "aliases": [ 00:06:19.929 "4cb6894d-673e-4c23-acee-484df654e2cb" 00:06:19.929 ], 00:06:19.929 "product_name": "Raid Volume", 00:06:19.929 "block_size": 512, 00:06:19.929 "num_blocks": 131072, 00:06:19.929 "uuid": "4cb6894d-673e-4c23-acee-484df654e2cb", 00:06:19.929 "assigned_rate_limits": { 00:06:19.929 "rw_ios_per_sec": 0, 00:06:19.929 "rw_mbytes_per_sec": 0, 00:06:19.929 "r_mbytes_per_sec": 0, 00:06:19.929 "w_mbytes_per_sec": 0 00:06:19.929 }, 00:06:19.929 "claimed": false, 00:06:19.929 "zoned": false, 00:06:19.929 "supported_io_types": { 00:06:19.929 "read": true, 00:06:19.929 "write": true, 00:06:19.929 "unmap": true, 00:06:19.929 "flush": true, 00:06:19.929 "reset": true, 00:06:19.929 "nvme_admin": false, 00:06:19.929 "nvme_io": false, 00:06:19.929 "nvme_io_md": false, 00:06:19.929 "write_zeroes": true, 00:06:19.929 "zcopy": false, 00:06:19.929 "get_zone_info": false, 00:06:19.929 "zone_management": false, 00:06:19.929 "zone_append": false, 00:06:19.929 "compare": false, 00:06:19.929 "compare_and_write": false, 00:06:19.929 "abort": false, 00:06:19.929 "seek_hole": false, 00:06:19.929 "seek_data": false, 00:06:19.929 "copy": false, 00:06:19.929 "nvme_iov_md": false 00:06:19.929 }, 00:06:19.929 "memory_domains": [ 00:06:19.929 { 00:06:19.929 "dma_device_id": "system", 00:06:19.929 "dma_device_type": 1 00:06:19.929 }, 00:06:19.929 { 00:06:19.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:19.929 "dma_device_type": 2 00:06:19.929 }, 00:06:19.929 { 00:06:19.929 "dma_device_id": "system", 00:06:19.929 "dma_device_type": 1 00:06:19.929 }, 
00:06:19.929 { 00:06:19.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:19.929 "dma_device_type": 2 00:06:19.929 } 00:06:19.929 ], 00:06:19.929 "driver_specific": { 00:06:19.929 "raid": { 00:06:19.929 "uuid": "4cb6894d-673e-4c23-acee-484df654e2cb", 00:06:19.929 "strip_size_kb": 64, 00:06:19.929 "state": "online", 00:06:19.929 "raid_level": "raid0", 00:06:19.929 "superblock": false, 00:06:19.929 "num_base_bdevs": 2, 00:06:19.929 "num_base_bdevs_discovered": 2, 00:06:19.929 "num_base_bdevs_operational": 2, 00:06:19.929 "base_bdevs_list": [ 00:06:19.929 { 00:06:19.929 "name": "BaseBdev1", 00:06:19.929 "uuid": "0f54c6e6-a1b6-4822-89b6-776055be4051", 00:06:19.929 "is_configured": true, 00:06:19.929 "data_offset": 0, 00:06:19.929 "data_size": 65536 00:06:19.929 }, 00:06:19.929 { 00:06:19.929 "name": "BaseBdev2", 00:06:19.929 "uuid": "e7cc6c79-6295-40dc-8455-fcc02c6814e6", 00:06:19.929 "is_configured": true, 00:06:19.929 "data_offset": 0, 00:06:19.929 "data_size": 65536 00:06:19.929 } 00:06:19.929 ] 00:06:19.929 } 00:06:19.929 } 00:06:19.929 }' 00:06:19.929 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:19.929 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:19.929 BaseBdev2' 00:06:19.929 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:20.189 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:20.189 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:20.189 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:20.189 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:06:20.189 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.189 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.189 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.189 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:20.189 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:20.189 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:20.189 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:20.189 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:20.189 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.189 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.189 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.189 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:20.189 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.190 [2024-10-30 09:40:58.633734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:20.190 [2024-10-30 
09:40:58.633763] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:20.190 [2024-10-30 09:40:58.633810] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:20.190 "name": "Existed_Raid", 00:06:20.190 "uuid": "4cb6894d-673e-4c23-acee-484df654e2cb", 00:06:20.190 "strip_size_kb": 64, 00:06:20.190 "state": "offline", 00:06:20.190 "raid_level": "raid0", 00:06:20.190 "superblock": false, 00:06:20.190 "num_base_bdevs": 2, 00:06:20.190 "num_base_bdevs_discovered": 1, 00:06:20.190 "num_base_bdevs_operational": 1, 00:06:20.190 "base_bdevs_list": [ 00:06:20.190 { 00:06:20.190 "name": null, 00:06:20.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:20.190 "is_configured": false, 00:06:20.190 "data_offset": 0, 00:06:20.190 "data_size": 65536 00:06:20.190 }, 00:06:20.190 { 00:06:20.190 "name": "BaseBdev2", 00:06:20.190 "uuid": "e7cc6c79-6295-40dc-8455-fcc02c6814e6", 00:06:20.190 "is_configured": true, 00:06:20.190 "data_offset": 0, 00:06:20.190 "data_size": 65536 00:06:20.190 } 00:06:20.190 ] 00:06:20.190 }' 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:20.190 09:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.451 09:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:20.451 09:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:20.451 09:40:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:20.451 09:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.451 09:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.451 09:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:20.451 09:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.451 09:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:20.451 09:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:20.451 09:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:20.451 09:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.451 09:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.451 [2024-10-30 09:40:59.048537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:20.451 [2024-10-30 09:40:59.048584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:20.713 09:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.713 09:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:20.713 09:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:20.713 09:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:20.713 09:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:20.713 09:40:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.713 09:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.713 09:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.713 09:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:20.713 09:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:20.713 09:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:20.713 09:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 59558 00:06:20.713 09:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 59558 ']' 00:06:20.713 09:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 59558 00:06:20.713 09:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:06:20.713 09:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:20.713 09:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59558 00:06:20.713 killing process with pid 59558 00:06:20.713 09:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:20.713 09:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:20.713 09:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59558' 00:06:20.713 09:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 59558 00:06:20.713 [2024-10-30 09:40:59.168918] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:20.713 09:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 59558 00:06:20.713 [2024-10-30 
09:40:59.179485] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:21.285 ************************************ 00:06:21.285 END TEST raid_state_function_test 00:06:21.285 ************************************ 00:06:21.285 09:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:21.285 00:06:21.285 real 0m3.734s 00:06:21.285 user 0m5.401s 00:06:21.285 sys 0m0.560s 00:06:21.285 09:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:21.285 09:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.546 09:40:59 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:06:21.546 09:40:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:21.546 09:40:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:21.546 09:40:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:21.546 ************************************ 00:06:21.546 START TEST raid_state_function_test_sb 00:06:21.546 ************************************ 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 true 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:21.546 09:40:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:21.546 Process raid pid: 59794 00:06:21.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=59794 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 59794' 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 59794 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 59794 ']' 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
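The locals set above (`strip_size_create_arg='-z 64'`, `superblock_create_arg=-s`) are later spliced into the `bdev_raid_create` RPC call. A small sketch of how those pieces assemble into the command line seen in the xtrace; the helper function name is ours, the flags and their order are taken from the log.

```python
# Assemble bdev_raid_create arguments the way the xtrace shows:
# '-z 64' only for striped levels, '-s' only in the _sb (superblock) variant.
def raid_create_cmd(name, level, base_bdevs, strip_size=None, superblock=False):
    cmd = ["bdev_raid_create"]
    if strip_size is not None:       # raid0 path: strip_size_create_arg='-z 64'
        cmd += ["-z", str(strip_size)]
    if superblock:                   # raid_state_function_test_sb adds -s
        cmd.append("-s")
    cmd += ["-r", level, "-b", " ".join(base_bdevs), "-n", name]
    return cmd

print(raid_create_cmd("Existed_Raid", "raid0", ["BaseBdev1", "BaseBdev2"],
                      strip_size=64, superblock=True))
```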
00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:21.546 09:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:21.546 [2024-10-30 09:41:00.018342] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:06:21.546 [2024-10-30 09:41:00.018503] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:21.807 [2024-10-30 09:41:00.192720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.807 [2024-10-30 09:41:00.290659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.069 [2024-10-30 09:41:00.426634] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:22.069 [2024-10-30 09:41:00.426826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:22.330 09:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:22.330 09:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:06:22.330 09:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:22.330 09:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.330 09:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:22.330 [2024-10-30 09:41:00.851853] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:22.330 [2024-10-30 09:41:00.851900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:22.330 [2024-10-30 09:41:00.851911] 
bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:22.330 [2024-10-30 09:41:00.851922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:22.330 09:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.330 09:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:22.330 09:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:22.330 09:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:22.330 09:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:22.330 09:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:22.330 09:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:22.330 09:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:22.330 09:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:22.330 09:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:22.330 09:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:22.330 09:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:22.330 09:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:22.330 09:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.330 09:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
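`verify_raid_bdev_state` fetches the array's JSON with `rpc_cmd bdev_raid_get_bdevs all`, isolates it with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares fields against the expected values. A hedged Python equivalent of that check, using the JSON shape visible in the dumps in this log (the helper is our restatement of the shell logic, not SPDK code):

```python
import json

# JSON shape copied from the bdev_raid_get_bdevs output in this log.
raid_bdevs = json.loads("""[
  {"name": "Existed_Raid", "strip_size_kb": 64, "state": "configuring",
   "raid_level": "raid0", "superblock": true, "num_base_bdevs": 2,
   "num_base_bdevs_discovered": 0, "num_base_bdevs_operational": 2}
]""")

def verify_raid_bdev_state(bdevs, name, state, level, strip_kb, operational):
    # Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == state
    assert info["raid_level"] == level
    assert info["strip_size_kb"] == strip_kb
    assert info["num_base_bdevs_operational"] == operational
    return info

info = verify_raid_bdev_state(raid_bdevs, "Existed_Raid",
                              "configuring", "raid0", 64, 2)
print(info["state"])  # configuring
```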
00:06:22.330 09:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.330 09:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:22.330 "name": "Existed_Raid", 00:06:22.330 "uuid": "1856f6dc-9156-425e-9a92-ae4fc7a880f8", 00:06:22.330 "strip_size_kb": 64, 00:06:22.330 "state": "configuring", 00:06:22.330 "raid_level": "raid0", 00:06:22.330 "superblock": true, 00:06:22.330 "num_base_bdevs": 2, 00:06:22.330 "num_base_bdevs_discovered": 0, 00:06:22.330 "num_base_bdevs_operational": 2, 00:06:22.330 "base_bdevs_list": [ 00:06:22.330 { 00:06:22.330 "name": "BaseBdev1", 00:06:22.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:22.330 "is_configured": false, 00:06:22.330 "data_offset": 0, 00:06:22.330 "data_size": 0 00:06:22.330 }, 00:06:22.331 { 00:06:22.331 "name": "BaseBdev2", 00:06:22.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:22.331 "is_configured": false, 00:06:22.331 "data_offset": 0, 00:06:22.331 "data_size": 0 00:06:22.331 } 00:06:22.331 ] 00:06:22.331 }' 00:06:22.331 09:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:22.331 09:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:22.592 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:22.592 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.592 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:22.592 [2024-10-30 09:41:01.163879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:22.592 [2024-10-30 09:41:01.163909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:22.592 09:41:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.592 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:22.592 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.592 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:22.592 [2024-10-30 09:41:01.171890] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:22.592 [2024-10-30 09:41:01.171926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:22.592 [2024-10-30 09:41:01.171935] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:22.592 [2024-10-30 09:41:01.171947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:22.592 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.592 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:22.592 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.592 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:22.592 [2024-10-30 09:41:01.204271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:22.592 BaseBdev1 00:06:22.592 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.592 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:22.592 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:06:22.592 09:41:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:06:22.592 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:06:22.592 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:06:22.592 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:06:22.592 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:06:22.592 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.592 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:22.852 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.852 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:22.852 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.852 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:22.852 [ 00:06:22.852 { 00:06:22.852 "name": "BaseBdev1", 00:06:22.852 "aliases": [ 00:06:22.852 "98c30a6c-8abc-4ab2-930d-c4a92db06664" 00:06:22.852 ], 00:06:22.852 "product_name": "Malloc disk", 00:06:22.852 "block_size": 512, 00:06:22.852 "num_blocks": 65536, 00:06:22.852 "uuid": "98c30a6c-8abc-4ab2-930d-c4a92db06664", 00:06:22.852 "assigned_rate_limits": { 00:06:22.852 "rw_ios_per_sec": 0, 00:06:22.852 "rw_mbytes_per_sec": 0, 00:06:22.852 "r_mbytes_per_sec": 0, 00:06:22.852 "w_mbytes_per_sec": 0 00:06:22.852 }, 00:06:22.852 "claimed": true, 00:06:22.852 "claim_type": "exclusive_write", 00:06:22.852 "zoned": false, 00:06:22.852 "supported_io_types": { 00:06:22.852 "read": true, 00:06:22.852 "write": true, 00:06:22.852 "unmap": true, 00:06:22.852 "flush": true, 00:06:22.852 
"reset": true, 00:06:22.852 "nvme_admin": false, 00:06:22.852 "nvme_io": false, 00:06:22.852 "nvme_io_md": false, 00:06:22.852 "write_zeroes": true, 00:06:22.852 "zcopy": true, 00:06:22.852 "get_zone_info": false, 00:06:22.852 "zone_management": false, 00:06:22.852 "zone_append": false, 00:06:22.852 "compare": false, 00:06:22.852 "compare_and_write": false, 00:06:22.852 "abort": true, 00:06:22.852 "seek_hole": false, 00:06:22.852 "seek_data": false, 00:06:22.852 "copy": true, 00:06:22.852 "nvme_iov_md": false 00:06:22.852 }, 00:06:22.852 "memory_domains": [ 00:06:22.852 { 00:06:22.852 "dma_device_id": "system", 00:06:22.852 "dma_device_type": 1 00:06:22.852 }, 00:06:22.852 { 00:06:22.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.852 "dma_device_type": 2 00:06:22.852 } 00:06:22.852 ], 00:06:22.852 "driver_specific": {} 00:06:22.852 } 00:06:22.852 ] 00:06:22.852 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.852 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:06:22.852 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:22.852 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:22.852 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:22.852 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:22.852 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:22.852 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:22.852 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:22.852 09:41:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:22.852 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:22.852 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:22.852 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:22.852 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:22.852 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.852 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:22.853 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.853 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:22.853 "name": "Existed_Raid", 00:06:22.853 "uuid": "dfa4a065-789e-46d2-baeb-9bcfe1399a73", 00:06:22.853 "strip_size_kb": 64, 00:06:22.853 "state": "configuring", 00:06:22.853 "raid_level": "raid0", 00:06:22.853 "superblock": true, 00:06:22.853 "num_base_bdevs": 2, 00:06:22.853 "num_base_bdevs_discovered": 1, 00:06:22.853 "num_base_bdevs_operational": 2, 00:06:22.853 "base_bdevs_list": [ 00:06:22.853 { 00:06:22.853 "name": "BaseBdev1", 00:06:22.853 "uuid": "98c30a6c-8abc-4ab2-930d-c4a92db06664", 00:06:22.853 "is_configured": true, 00:06:22.853 "data_offset": 2048, 00:06:22.853 "data_size": 63488 00:06:22.853 }, 00:06:22.853 { 00:06:22.853 "name": "BaseBdev2", 00:06:22.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:22.853 "is_configured": false, 00:06:22.853 "data_offset": 0, 00:06:22.853 "data_size": 0 00:06:22.853 } 00:06:22.853 ] 00:06:22.853 }' 00:06:22.853 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:22.853 
09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:23.113 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:23.114 [2024-10-30 09:41:01.552378] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:23.114 [2024-10-30 09:41:01.552518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:23.114 [2024-10-30 09:41:01.560447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:23.114 [2024-10-30 09:41:01.562308] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:23.114 [2024-10-30 09:41:01.562432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:23.114 "name": "Existed_Raid", 00:06:23.114 "uuid": "db31c13c-e331-43da-8cba-d10ef4e28fd4", 00:06:23.114 "strip_size_kb": 64, 00:06:23.114 "state": "configuring", 
00:06:23.114 "raid_level": "raid0", 00:06:23.114 "superblock": true, 00:06:23.114 "num_base_bdevs": 2, 00:06:23.114 "num_base_bdevs_discovered": 1, 00:06:23.114 "num_base_bdevs_operational": 2, 00:06:23.114 "base_bdevs_list": [ 00:06:23.114 { 00:06:23.114 "name": "BaseBdev1", 00:06:23.114 "uuid": "98c30a6c-8abc-4ab2-930d-c4a92db06664", 00:06:23.114 "is_configured": true, 00:06:23.114 "data_offset": 2048, 00:06:23.114 "data_size": 63488 00:06:23.114 }, 00:06:23.114 { 00:06:23.114 "name": "BaseBdev2", 00:06:23.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:23.114 "is_configured": false, 00:06:23.114 "data_offset": 0, 00:06:23.114 "data_size": 0 00:06:23.114 } 00:06:23.114 ] 00:06:23.114 }' 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:23.114 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:23.375 [2024-10-30 09:41:01.882913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:23.375 [2024-10-30 09:41:01.883127] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:23.375 [2024-10-30 09:41:01.883140] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:23.375 [2024-10-30 09:41:01.883391] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:23.375 [2024-10-30 09:41:01.883522] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:23.375 [2024-10-30 09:41:01.883532] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:23.375 [2024-10-30 09:41:01.883651] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:23.375 BaseBdev2 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:23.375 [ 00:06:23.375 { 00:06:23.375 "name": "BaseBdev2", 00:06:23.375 "aliases": [ 00:06:23.375 "5fbd12fa-eead-4819-9cf3-512fc2401575" 00:06:23.375 ], 00:06:23.375 "product_name": 
"Malloc disk", 00:06:23.375 "block_size": 512, 00:06:23.375 "num_blocks": 65536, 00:06:23.375 "uuid": "5fbd12fa-eead-4819-9cf3-512fc2401575", 00:06:23.375 "assigned_rate_limits": { 00:06:23.375 "rw_ios_per_sec": 0, 00:06:23.375 "rw_mbytes_per_sec": 0, 00:06:23.375 "r_mbytes_per_sec": 0, 00:06:23.375 "w_mbytes_per_sec": 0 00:06:23.375 }, 00:06:23.375 "claimed": true, 00:06:23.375 "claim_type": "exclusive_write", 00:06:23.375 "zoned": false, 00:06:23.375 "supported_io_types": { 00:06:23.375 "read": true, 00:06:23.375 "write": true, 00:06:23.375 "unmap": true, 00:06:23.375 "flush": true, 00:06:23.375 "reset": true, 00:06:23.375 "nvme_admin": false, 00:06:23.375 "nvme_io": false, 00:06:23.375 "nvme_io_md": false, 00:06:23.375 "write_zeroes": true, 00:06:23.375 "zcopy": true, 00:06:23.375 "get_zone_info": false, 00:06:23.375 "zone_management": false, 00:06:23.375 "zone_append": false, 00:06:23.375 "compare": false, 00:06:23.375 "compare_and_write": false, 00:06:23.375 "abort": true, 00:06:23.375 "seek_hole": false, 00:06:23.375 "seek_data": false, 00:06:23.375 "copy": true, 00:06:23.375 "nvme_iov_md": false 00:06:23.375 }, 00:06:23.375 "memory_domains": [ 00:06:23.375 { 00:06:23.375 "dma_device_id": "system", 00:06:23.375 "dma_device_type": 1 00:06:23.375 }, 00:06:23.375 { 00:06:23.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.375 "dma_device_type": 2 00:06:23.375 } 00:06:23.375 ], 00:06:23.375 "driver_specific": {} 00:06:23.375 } 00:06:23.375 ] 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:23.375 "name": "Existed_Raid", 00:06:23.375 "uuid": "db31c13c-e331-43da-8cba-d10ef4e28fd4", 00:06:23.375 "strip_size_kb": 64, 00:06:23.375 "state": "online", 00:06:23.375 "raid_level": "raid0", 00:06:23.375 
"superblock": true, 00:06:23.375 "num_base_bdevs": 2, 00:06:23.375 "num_base_bdevs_discovered": 2, 00:06:23.375 "num_base_bdevs_operational": 2, 00:06:23.375 "base_bdevs_list": [ 00:06:23.375 { 00:06:23.375 "name": "BaseBdev1", 00:06:23.375 "uuid": "98c30a6c-8abc-4ab2-930d-c4a92db06664", 00:06:23.375 "is_configured": true, 00:06:23.375 "data_offset": 2048, 00:06:23.375 "data_size": 63488 00:06:23.375 }, 00:06:23.375 { 00:06:23.375 "name": "BaseBdev2", 00:06:23.375 "uuid": "5fbd12fa-eead-4819-9cf3-512fc2401575", 00:06:23.375 "is_configured": true, 00:06:23.375 "data_offset": 2048, 00:06:23.375 "data_size": 63488 00:06:23.375 } 00:06:23.375 ] 00:06:23.375 }' 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:23.375 09:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:23.636 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:23.636 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:23.636 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:23.636 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:23.636 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:23.636 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:23.636 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:23.636 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:23.636 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.636 09:41:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:06:23.636 [2024-10-30 09:41:02.243340] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:23.899 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.899 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:23.899 "name": "Existed_Raid", 00:06:23.899 "aliases": [ 00:06:23.899 "db31c13c-e331-43da-8cba-d10ef4e28fd4" 00:06:23.899 ], 00:06:23.899 "product_name": "Raid Volume", 00:06:23.899 "block_size": 512, 00:06:23.899 "num_blocks": 126976, 00:06:23.899 "uuid": "db31c13c-e331-43da-8cba-d10ef4e28fd4", 00:06:23.899 "assigned_rate_limits": { 00:06:23.899 "rw_ios_per_sec": 0, 00:06:23.899 "rw_mbytes_per_sec": 0, 00:06:23.899 "r_mbytes_per_sec": 0, 00:06:23.899 "w_mbytes_per_sec": 0 00:06:23.899 }, 00:06:23.899 "claimed": false, 00:06:23.899 "zoned": false, 00:06:23.899 "supported_io_types": { 00:06:23.899 "read": true, 00:06:23.899 "write": true, 00:06:23.899 "unmap": true, 00:06:23.899 "flush": true, 00:06:23.899 "reset": true, 00:06:23.899 "nvme_admin": false, 00:06:23.899 "nvme_io": false, 00:06:23.899 "nvme_io_md": false, 00:06:23.899 "write_zeroes": true, 00:06:23.899 "zcopy": false, 00:06:23.899 "get_zone_info": false, 00:06:23.899 "zone_management": false, 00:06:23.899 "zone_append": false, 00:06:23.899 "compare": false, 00:06:23.899 "compare_and_write": false, 00:06:23.899 "abort": false, 00:06:23.899 "seek_hole": false, 00:06:23.899 "seek_data": false, 00:06:23.899 "copy": false, 00:06:23.899 "nvme_iov_md": false 00:06:23.899 }, 00:06:23.899 "memory_domains": [ 00:06:23.899 { 00:06:23.899 "dma_device_id": "system", 00:06:23.899 "dma_device_type": 1 00:06:23.899 }, 00:06:23.899 { 00:06:23.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.899 "dma_device_type": 2 00:06:23.899 }, 00:06:23.899 { 00:06:23.899 "dma_device_id": "system", 00:06:23.899 "dma_device_type": 1 00:06:23.899 }, 
00:06:23.899 { 00:06:23.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.899 "dma_device_type": 2 00:06:23.899 } 00:06:23.900 ], 00:06:23.900 "driver_specific": { 00:06:23.900 "raid": { 00:06:23.900 "uuid": "db31c13c-e331-43da-8cba-d10ef4e28fd4", 00:06:23.900 "strip_size_kb": 64, 00:06:23.900 "state": "online", 00:06:23.900 "raid_level": "raid0", 00:06:23.900 "superblock": true, 00:06:23.900 "num_base_bdevs": 2, 00:06:23.900 "num_base_bdevs_discovered": 2, 00:06:23.900 "num_base_bdevs_operational": 2, 00:06:23.900 "base_bdevs_list": [ 00:06:23.900 { 00:06:23.900 "name": "BaseBdev1", 00:06:23.900 "uuid": "98c30a6c-8abc-4ab2-930d-c4a92db06664", 00:06:23.900 "is_configured": true, 00:06:23.900 "data_offset": 2048, 00:06:23.900 "data_size": 63488 00:06:23.900 }, 00:06:23.900 { 00:06:23.900 "name": "BaseBdev2", 00:06:23.900 "uuid": "5fbd12fa-eead-4819-9cf3-512fc2401575", 00:06:23.900 "is_configured": true, 00:06:23.900 "data_offset": 2048, 00:06:23.900 "data_size": 63488 00:06:23.900 } 00:06:23.900 ] 00:06:23.900 } 00:06:23.900 } 00:06:23.900 }' 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:23.900 BaseBdev2' 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:23.900 [2024-10-30 09:41:02.407133] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:23.900 [2024-10-30 09:41:02.407243] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:23.900 [2024-10-30 09:41:02.407342] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:23.900 "name": "Existed_Raid", 00:06:23.900 "uuid": "db31c13c-e331-43da-8cba-d10ef4e28fd4", 00:06:23.900 "strip_size_kb": 64, 00:06:23.900 "state": "offline", 00:06:23.900 "raid_level": "raid0", 00:06:23.900 "superblock": true, 00:06:23.900 "num_base_bdevs": 2, 00:06:23.900 "num_base_bdevs_discovered": 1, 00:06:23.900 "num_base_bdevs_operational": 1, 00:06:23.900 "base_bdevs_list": [ 00:06:23.900 { 00:06:23.900 "name": null, 00:06:23.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:23.900 "is_configured": false, 00:06:23.900 "data_offset": 0, 00:06:23.900 "data_size": 63488 00:06:23.900 }, 00:06:23.900 { 00:06:23.900 "name": "BaseBdev2", 00:06:23.900 "uuid": "5fbd12fa-eead-4819-9cf3-512fc2401575", 00:06:23.900 "is_configured": true, 00:06:23.900 "data_offset": 2048, 00:06:23.900 "data_size": 63488 00:06:23.900 } 00:06:23.900 ] 00:06:23.900 }' 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:23.900 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:24.472 [2024-10-30 09:41:02.832939] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:24.472 [2024-10-30 09:41:02.833088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | 
select(.)' 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 59794 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 59794 ']' 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 59794 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59794 00:06:24.472 killing process with pid 59794 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59794' 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 59794 
00:06:24.472 [2024-10-30 09:41:02.947512] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:24.472 09:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 59794 00:06:24.472 [2024-10-30 09:41:02.957990] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:25.045 09:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:06:25.045 00:06:25.045 real 0m3.728s 00:06:25.045 user 0m5.421s 00:06:25.045 sys 0m0.516s 00:06:25.045 ************************************ 00:06:25.045 END TEST raid_state_function_test_sb 00:06:25.045 ************************************ 00:06:25.045 09:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:25.045 09:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:25.307 09:41:03 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:06:25.307 09:41:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:25.307 09:41:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:25.307 09:41:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:25.307 ************************************ 00:06:25.307 START TEST raid_superblock_test 00:06:25.307 ************************************ 00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 2 00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=60038 00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 60038 00:06:25.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60038 ']' 00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:25.307 09:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:25.307 [2024-10-30 09:41:03.787963] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:06:25.307 [2024-10-30 09:41:03.788115] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60038 ] 00:06:25.568 [2024-10-30 09:41:03.946785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.568 [2024-10-30 09:41:04.046054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.568 [2024-10-30 09:41:04.181490] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:25.568 [2024-10-30 09:41:04.181687] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:26.139 09:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:26.139 09:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:06:26.139 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:06:26.139 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:26.139 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:06:26.139 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:06:26.139 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:26.139 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:26.139 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:26.139 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:26.139 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:06:26.139 
09:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.139 09:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.139 malloc1 00:06:26.139 09:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.139 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:26.139 09:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.139 09:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.139 [2024-10-30 09:41:04.670099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:26.139 [2024-10-30 09:41:04.670156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:26.139 [2024-10-30 09:41:04.670178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:26.140 [2024-10-30 09:41:04.670189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:26.140 [2024-10-30 09:41:04.672313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:26.140 [2024-10-30 09:41:04.672440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:26.140 pt1 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.140 malloc2 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.140 [2024-10-30 09:41:04.710315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:26.140 [2024-10-30 09:41:04.710357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:26.140 [2024-10-30 09:41:04.710379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:06:26.140 [2024-10-30 09:41:04.710389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:26.140 [2024-10-30 09:41:04.712464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:26.140 [2024-10-30 09:41:04.712494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:26.140 
pt2 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.140 [2024-10-30 09:41:04.718362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:26.140 [2024-10-30 09:41:04.720232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:26.140 [2024-10-30 09:41:04.720380] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:26.140 [2024-10-30 09:41:04.720391] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:26.140 [2024-10-30 09:41:04.720638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:26.140 [2024-10-30 09:41:04.720769] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:26.140 [2024-10-30 09:41:04.720780] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:06:26.140 [2024-10-30 09:41:04.720913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.140 09:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.401 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:26.401 "name": "raid_bdev1", 00:06:26.401 "uuid": "039d6e9c-a0be-498c-93ca-44b6e532f99e", 00:06:26.401 "strip_size_kb": 64, 00:06:26.401 "state": "online", 00:06:26.401 "raid_level": "raid0", 00:06:26.401 "superblock": true, 00:06:26.401 "num_base_bdevs": 2, 00:06:26.401 "num_base_bdevs_discovered": 2, 00:06:26.401 "num_base_bdevs_operational": 2, 00:06:26.401 "base_bdevs_list": [ 00:06:26.401 { 00:06:26.401 "name": "pt1", 
00:06:26.401 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:26.401 "is_configured": true, 00:06:26.401 "data_offset": 2048, 00:06:26.401 "data_size": 63488 00:06:26.401 }, 00:06:26.401 { 00:06:26.401 "name": "pt2", 00:06:26.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:26.401 "is_configured": true, 00:06:26.401 "data_offset": 2048, 00:06:26.401 "data_size": 63488 00:06:26.401 } 00:06:26.401 ] 00:06:26.401 }' 00:06:26.401 09:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:26.401 09:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.663 [2024-10-30 09:41:05.034683] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:26.663 "name": "raid_bdev1", 00:06:26.663 "aliases": [ 00:06:26.663 "039d6e9c-a0be-498c-93ca-44b6e532f99e" 00:06:26.663 ], 00:06:26.663 "product_name": "Raid Volume", 00:06:26.663 "block_size": 512, 00:06:26.663 "num_blocks": 126976, 00:06:26.663 "uuid": "039d6e9c-a0be-498c-93ca-44b6e532f99e", 00:06:26.663 "assigned_rate_limits": { 00:06:26.663 "rw_ios_per_sec": 0, 00:06:26.663 "rw_mbytes_per_sec": 0, 00:06:26.663 "r_mbytes_per_sec": 0, 00:06:26.663 "w_mbytes_per_sec": 0 00:06:26.663 }, 00:06:26.663 "claimed": false, 00:06:26.663 "zoned": false, 00:06:26.663 "supported_io_types": { 00:06:26.663 "read": true, 00:06:26.663 "write": true, 00:06:26.663 "unmap": true, 00:06:26.663 "flush": true, 00:06:26.663 "reset": true, 00:06:26.663 "nvme_admin": false, 00:06:26.663 "nvme_io": false, 00:06:26.663 "nvme_io_md": false, 00:06:26.663 "write_zeroes": true, 00:06:26.663 "zcopy": false, 00:06:26.663 "get_zone_info": false, 00:06:26.663 "zone_management": false, 00:06:26.663 "zone_append": false, 00:06:26.663 "compare": false, 00:06:26.663 "compare_and_write": false, 00:06:26.663 "abort": false, 00:06:26.663 "seek_hole": false, 00:06:26.663 "seek_data": false, 00:06:26.663 "copy": false, 00:06:26.663 "nvme_iov_md": false 00:06:26.663 }, 00:06:26.663 "memory_domains": [ 00:06:26.663 { 00:06:26.663 "dma_device_id": "system", 00:06:26.663 "dma_device_type": 1 00:06:26.663 }, 00:06:26.663 { 00:06:26.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:26.663 "dma_device_type": 2 00:06:26.663 }, 00:06:26.663 { 00:06:26.663 "dma_device_id": "system", 00:06:26.663 "dma_device_type": 1 00:06:26.663 }, 00:06:26.663 { 00:06:26.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:26.663 "dma_device_type": 2 00:06:26.663 } 00:06:26.663 ], 00:06:26.663 "driver_specific": { 00:06:26.663 "raid": { 00:06:26.663 "uuid": "039d6e9c-a0be-498c-93ca-44b6e532f99e", 00:06:26.663 "strip_size_kb": 64, 00:06:26.663 "state": "online", 00:06:26.663 
"raid_level": "raid0", 00:06:26.663 "superblock": true, 00:06:26.663 "num_base_bdevs": 2, 00:06:26.663 "num_base_bdevs_discovered": 2, 00:06:26.663 "num_base_bdevs_operational": 2, 00:06:26.663 "base_bdevs_list": [ 00:06:26.663 { 00:06:26.663 "name": "pt1", 00:06:26.663 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:26.663 "is_configured": true, 00:06:26.663 "data_offset": 2048, 00:06:26.663 "data_size": 63488 00:06:26.663 }, 00:06:26.663 { 00:06:26.663 "name": "pt2", 00:06:26.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:26.663 "is_configured": true, 00:06:26.663 "data_offset": 2048, 00:06:26.663 "data_size": 63488 00:06:26.663 } 00:06:26.663 ] 00:06:26.663 } 00:06:26.663 } 00:06:26.663 }' 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:26.663 pt2' 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.663 09:41:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:26.663 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.664 [2024-10-30 09:41:05.206746] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=039d6e9c-a0be-498c-93ca-44b6e532f99e 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
039d6e9c-a0be-498c-93ca-44b6e532f99e ']' 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.664 [2024-10-30 09:41:05.230428] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:26.664 [2024-10-30 09:41:05.230449] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:26.664 [2024-10-30 09:41:05.230522] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:26.664 [2024-10-30 09:41:05.230569] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:26.664 [2024-10-30 09:41:05.230580] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:26.664 09:41:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.664 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create 
-z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.926 [2024-10-30 09:41:05.326482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:26.926 [2024-10-30 09:41:05.328363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:26.926 [2024-10-30 09:41:05.328425] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:06:26.926 [2024-10-30 09:41:05.328469] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:06:26.926 [2024-10-30 09:41:05.328484] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:26.926 [2024-10-30 09:41:05.328497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:06:26.926 request: 00:06:26.926 { 00:06:26.926 "name": "raid_bdev1", 00:06:26.926 "raid_level": "raid0", 00:06:26.926 "base_bdevs": [ 00:06:26.926 "malloc1", 00:06:26.926 "malloc2" 00:06:26.926 ], 00:06:26.926 "strip_size_kb": 64, 00:06:26.926 
"superblock": false, 00:06:26.926 "method": "bdev_raid_create", 00:06:26.926 "req_id": 1 00:06:26.926 } 00:06:26.926 Got JSON-RPC error response 00:06:26.926 response: 00:06:26.926 { 00:06:26.926 "code": -17, 00:06:26.926 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:26.926 } 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:06:26.926 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:26.927 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.927 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.927 [2024-10-30 09:41:05.370466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on malloc1 00:06:26.927 [2024-10-30 09:41:05.370509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:26.927 [2024-10-30 09:41:05.370525] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:26.927 [2024-10-30 09:41:05.370535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:26.927 [2024-10-30 09:41:05.372670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:26.927 [2024-10-30 09:41:05.372779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:26.927 [2024-10-30 09:41:05.372899] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:06:26.927 [2024-10-30 09:41:05.372953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:26.927 pt1 00:06:26.927 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.927 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:06:26.927 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:26.927 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:26.927 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:26.927 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:26.927 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:26.927 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:26.927 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:26.927 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:06:26.927 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:26.927 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:26.927 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:26.927 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.927 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.927 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.927 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:26.927 "name": "raid_bdev1", 00:06:26.927 "uuid": "039d6e9c-a0be-498c-93ca-44b6e532f99e", 00:06:26.927 "strip_size_kb": 64, 00:06:26.927 "state": "configuring", 00:06:26.927 "raid_level": "raid0", 00:06:26.927 "superblock": true, 00:06:26.927 "num_base_bdevs": 2, 00:06:26.927 "num_base_bdevs_discovered": 1, 00:06:26.927 "num_base_bdevs_operational": 2, 00:06:26.927 "base_bdevs_list": [ 00:06:26.927 { 00:06:26.927 "name": "pt1", 00:06:26.927 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:26.927 "is_configured": true, 00:06:26.927 "data_offset": 2048, 00:06:26.927 "data_size": 63488 00:06:26.927 }, 00:06:26.927 { 00:06:26.927 "name": null, 00:06:26.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:26.927 "is_configured": false, 00:06:26.927 "data_offset": 2048, 00:06:26.927 "data_size": 63488 00:06:26.927 } 00:06:26.927 ] 00:06:26.927 }' 00:06:26.927 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:26.927 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.190 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:06:27.190 09:41:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:06:27.190 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:27.190 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:27.190 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.190 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.190 [2024-10-30 09:41:05.682571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:27.190 [2024-10-30 09:41:05.682789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:27.190 [2024-10-30 09:41:05.682843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:06:27.190 [2024-10-30 09:41:05.682890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:27.190 [2024-10-30 09:41:05.683354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:27.190 [2024-10-30 09:41:05.683428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:27.191 [2024-10-30 09:41:05.683531] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:06:27.191 [2024-10-30 09:41:05.683554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:27.191 [2024-10-30 09:41:05.683657] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:27.191 [2024-10-30 09:41:05.683675] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:27.191 [2024-10-30 09:41:05.683903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:06:27.191 [2024-10-30 09:41:05.684021] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:06:27.191 [2024-10-30 09:41:05.684029] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:27.191 [2024-10-30 09:41:05.684182] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:27.191 pt2 00:06:27.191 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.191 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:06:27.191 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:27.191 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:27.191 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:27.191 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:27.191 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:27.191 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:27.191 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:27.191 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:27.191 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:27.191 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:27.191 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:27.191 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:27.191 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.191 09:41:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:06:27.191 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:27.191 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.191 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:27.191 "name": "raid_bdev1", 00:06:27.191 "uuid": "039d6e9c-a0be-498c-93ca-44b6e532f99e", 00:06:27.191 "strip_size_kb": 64, 00:06:27.191 "state": "online", 00:06:27.191 "raid_level": "raid0", 00:06:27.191 "superblock": true, 00:06:27.191 "num_base_bdevs": 2, 00:06:27.191 "num_base_bdevs_discovered": 2, 00:06:27.191 "num_base_bdevs_operational": 2, 00:06:27.191 "base_bdevs_list": [ 00:06:27.191 { 00:06:27.191 "name": "pt1", 00:06:27.191 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:27.191 "is_configured": true, 00:06:27.191 "data_offset": 2048, 00:06:27.191 "data_size": 63488 00:06:27.191 }, 00:06:27.191 { 00:06:27.191 "name": "pt2", 00:06:27.191 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:27.191 "is_configured": true, 00:06:27.191 "data_offset": 2048, 00:06:27.191 "data_size": 63488 00:06:27.191 } 00:06:27.191 ] 00:06:27.191 }' 00:06:27.191 09:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:27.191 09:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.452 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:06:27.452 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:27.452 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:27.452 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:27.452 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:27.452 09:41:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:27.452 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:27.452 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:27.452 09:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.452 09:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.452 [2024-10-30 09:41:06.026914] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:27.452 09:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.452 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:27.452 "name": "raid_bdev1", 00:06:27.452 "aliases": [ 00:06:27.452 "039d6e9c-a0be-498c-93ca-44b6e532f99e" 00:06:27.452 ], 00:06:27.452 "product_name": "Raid Volume", 00:06:27.452 "block_size": 512, 00:06:27.452 "num_blocks": 126976, 00:06:27.452 "uuid": "039d6e9c-a0be-498c-93ca-44b6e532f99e", 00:06:27.452 "assigned_rate_limits": { 00:06:27.452 "rw_ios_per_sec": 0, 00:06:27.452 "rw_mbytes_per_sec": 0, 00:06:27.452 "r_mbytes_per_sec": 0, 00:06:27.452 "w_mbytes_per_sec": 0 00:06:27.452 }, 00:06:27.452 "claimed": false, 00:06:27.452 "zoned": false, 00:06:27.452 "supported_io_types": { 00:06:27.452 "read": true, 00:06:27.452 "write": true, 00:06:27.452 "unmap": true, 00:06:27.452 "flush": true, 00:06:27.452 "reset": true, 00:06:27.452 "nvme_admin": false, 00:06:27.452 "nvme_io": false, 00:06:27.452 "nvme_io_md": false, 00:06:27.452 "write_zeroes": true, 00:06:27.452 "zcopy": false, 00:06:27.452 "get_zone_info": false, 00:06:27.452 "zone_management": false, 00:06:27.452 "zone_append": false, 00:06:27.452 "compare": false, 00:06:27.452 "compare_and_write": false, 00:06:27.452 "abort": false, 00:06:27.452 "seek_hole": false, 00:06:27.452 
"seek_data": false, 00:06:27.453 "copy": false, 00:06:27.453 "nvme_iov_md": false 00:06:27.453 }, 00:06:27.453 "memory_domains": [ 00:06:27.453 { 00:06:27.453 "dma_device_id": "system", 00:06:27.453 "dma_device_type": 1 00:06:27.453 }, 00:06:27.453 { 00:06:27.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.453 "dma_device_type": 2 00:06:27.453 }, 00:06:27.453 { 00:06:27.453 "dma_device_id": "system", 00:06:27.453 "dma_device_type": 1 00:06:27.453 }, 00:06:27.453 { 00:06:27.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.453 "dma_device_type": 2 00:06:27.453 } 00:06:27.453 ], 00:06:27.453 "driver_specific": { 00:06:27.453 "raid": { 00:06:27.453 "uuid": "039d6e9c-a0be-498c-93ca-44b6e532f99e", 00:06:27.453 "strip_size_kb": 64, 00:06:27.453 "state": "online", 00:06:27.453 "raid_level": "raid0", 00:06:27.453 "superblock": true, 00:06:27.453 "num_base_bdevs": 2, 00:06:27.453 "num_base_bdevs_discovered": 2, 00:06:27.453 "num_base_bdevs_operational": 2, 00:06:27.453 "base_bdevs_list": [ 00:06:27.453 { 00:06:27.453 "name": "pt1", 00:06:27.453 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:27.453 "is_configured": true, 00:06:27.453 "data_offset": 2048, 00:06:27.453 "data_size": 63488 00:06:27.453 }, 00:06:27.453 { 00:06:27.453 "name": "pt2", 00:06:27.453 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:27.453 "is_configured": true, 00:06:27.453 "data_offset": 2048, 00:06:27.453 "data_size": 63488 00:06:27.453 } 00:06:27.453 ] 00:06:27.453 } 00:06:27.453 } 00:06:27.453 }' 00:06:27.453 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:27.714 pt2' 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:27.714 09:41:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:06:27.714 [2024-10-30 09:41:06.174906] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 039d6e9c-a0be-498c-93ca-44b6e532f99e '!=' 039d6e9c-a0be-498c-93ca-44b6e532f99e ']' 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 60038 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60038 ']' 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60038 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60038 00:06:27.714 killing process with pid 60038 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:27.714 09:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:27.715 09:41:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 60038' 00:06:27.715 09:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 60038 00:06:27.715 [2024-10-30 09:41:06.229321] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:27.715 09:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 60038 00:06:27.715 [2024-10-30 09:41:06.229400] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:27.715 [2024-10-30 09:41:06.229444] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:27.715 [2024-10-30 09:41:06.229455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:27.976 [2024-10-30 09:41:06.355215] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:28.610 09:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:06:28.610 00:06:28.610 real 0m3.318s 00:06:28.610 user 0m4.655s 00:06:28.610 sys 0m0.529s 00:06:28.610 09:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:28.610 ************************************ 00:06:28.610 END TEST raid_superblock_test 00:06:28.610 ************************************ 00:06:28.610 09:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.610 09:41:07 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:06:28.610 09:41:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:28.610 09:41:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:28.610 09:41:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:28.610 ************************************ 00:06:28.610 START TEST raid_read_error_test 00:06:28.610 ************************************ 00:06:28.610 09:41:07 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 read 00:06:28.610 09:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:06:28.610 09:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:28.610 09:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:06:28.610 09:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:28.610 09:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:28.610 09:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:28.610 09:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:28.610 09:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:28.610 09:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:28.611 09:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:28.611 09:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:28.611 09:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:28.611 09:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:28.611 09:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:28.611 09:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:28.611 09:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:28.611 09:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:28.611 09:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:28.611 09:41:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:06:28.611 09:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:28.611 09:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:28.611 09:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:28.611 09:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.myKhlIuW3Q 00:06:28.611 09:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=60233 00:06:28.611 09:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 60233 00:06:28.611 09:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 60233 ']' 00:06:28.611 09:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.611 09:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:28.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.611 09:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.611 09:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:28.611 09:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:28.611 09:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.611 [2024-10-30 09:41:07.183361] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:06:28.611 [2024-10-30 09:41:07.183477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60233 ] 00:06:28.872 [2024-10-30 09:41:07.338252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.872 [2024-10-30 09:41:07.438107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.133 [2024-10-30 09:41:07.590245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:29.133 [2024-10-30 09:41:07.590288] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:29.703 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:29.703 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:06:29.703 09:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:29.703 09:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:29.703 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.704 BaseBdev1_malloc 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.704 true 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.704 [2024-10-30 09:41:08.074374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:29.704 [2024-10-30 09:41:08.074420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:29.704 [2024-10-30 09:41:08.074438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:29.704 [2024-10-30 09:41:08.074449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:29.704 [2024-10-30 09:41:08.076584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:29.704 [2024-10-30 09:41:08.076615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:29.704 BaseBdev1 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.704 BaseBdev2_malloc 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.704 true 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.704 [2024-10-30 09:41:08.118330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:29.704 [2024-10-30 09:41:08.118373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:29.704 [2024-10-30 09:41:08.118389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:29.704 [2024-10-30 09:41:08.118400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:29.704 [2024-10-30 09:41:08.120491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:29.704 [2024-10-30 09:41:08.120520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:29.704 BaseBdev2 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.704 [2024-10-30 09:41:08.126398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:06:29.704 [2024-10-30 09:41:08.128237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:29.704 [2024-10-30 09:41:08.128419] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:29.704 [2024-10-30 09:41:08.128434] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:29.704 [2024-10-30 09:41:08.128669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:29.704 [2024-10-30 09:41:08.128818] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:29.704 [2024-10-30 09:41:08.128835] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:29.704 [2024-10-30 09:41:08.128974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:29.704 "name": "raid_bdev1", 00:06:29.704 "uuid": "cf06ca4a-b8fe-49a8-b10f-c0c6ff6fd842", 00:06:29.704 "strip_size_kb": 64, 00:06:29.704 "state": "online", 00:06:29.704 "raid_level": "raid0", 00:06:29.704 "superblock": true, 00:06:29.704 "num_base_bdevs": 2, 00:06:29.704 "num_base_bdevs_discovered": 2, 00:06:29.704 "num_base_bdevs_operational": 2, 00:06:29.704 "base_bdevs_list": [ 00:06:29.704 { 00:06:29.704 "name": "BaseBdev1", 00:06:29.704 "uuid": "556e622d-61d1-5239-8f68-969e3e6e63eb", 00:06:29.704 "is_configured": true, 00:06:29.704 "data_offset": 2048, 00:06:29.704 "data_size": 63488 00:06:29.704 }, 00:06:29.704 { 00:06:29.704 "name": "BaseBdev2", 00:06:29.704 "uuid": "db6b12d1-f18c-5b0c-bebd-0f07799e6bfe", 00:06:29.704 "is_configured": true, 00:06:29.704 "data_offset": 2048, 00:06:29.704 "data_size": 63488 00:06:29.704 } 00:06:29.704 ] 00:06:29.704 }' 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:29.704 09:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.963 09:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:29.963 09:41:08 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:29.963 [2024-10-30 09:41:08.555483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:30.907 09:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:06:30.907 09:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.907 09:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.907 09:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.907 09:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:30.907 09:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:06:30.907 09:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:30.907 09:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:30.907 09:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:30.907 09:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:30.907 09:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:30.907 09:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:30.907 09:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:30.907 09:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:30.907 09:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:30.907 09:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:06:30.907 09:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:30.907 09:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:30.907 09:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.907 09:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.907 09:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:30.907 09:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.907 09:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:30.907 "name": "raid_bdev1", 00:06:30.907 "uuid": "cf06ca4a-b8fe-49a8-b10f-c0c6ff6fd842", 00:06:30.907 "strip_size_kb": 64, 00:06:30.907 "state": "online", 00:06:30.907 "raid_level": "raid0", 00:06:30.907 "superblock": true, 00:06:30.907 "num_base_bdevs": 2, 00:06:30.907 "num_base_bdevs_discovered": 2, 00:06:30.907 "num_base_bdevs_operational": 2, 00:06:30.907 "base_bdevs_list": [ 00:06:30.907 { 00:06:30.907 "name": "BaseBdev1", 00:06:30.907 "uuid": "556e622d-61d1-5239-8f68-969e3e6e63eb", 00:06:30.907 "is_configured": true, 00:06:30.907 "data_offset": 2048, 00:06:30.907 "data_size": 63488 00:06:30.907 }, 00:06:30.907 { 00:06:30.907 "name": "BaseBdev2", 00:06:30.907 "uuid": "db6b12d1-f18c-5b0c-bebd-0f07799e6bfe", 00:06:30.907 "is_configured": true, 00:06:30.907 "data_offset": 2048, 00:06:30.907 "data_size": 63488 00:06:30.907 } 00:06:30.907 ] 00:06:30.907 }' 00:06:30.907 09:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:30.907 09:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.174 09:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:31.174 09:41:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.174 09:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.435 [2024-10-30 09:41:09.793945] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:31.435 [2024-10-30 09:41:09.793984] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:31.435 [2024-10-30 09:41:09.797148] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:31.435 [2024-10-30 09:41:09.797205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:31.435 [2024-10-30 09:41:09.797250] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:31.435 [2024-10-30 09:41:09.797267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:31.435 { 00:06:31.435 "results": [ 00:06:31.435 { 00:06:31.435 "job": "raid_bdev1", 00:06:31.435 "core_mask": "0x1", 00:06:31.435 "workload": "randrw", 00:06:31.435 "percentage": 50, 00:06:31.435 "status": "finished", 00:06:31.435 "queue_depth": 1, 00:06:31.435 "io_size": 131072, 00:06:31.435 "runtime": 1.236469, 00:06:31.435 "iops": 14560.817942059202, 00:06:31.435 "mibps": 1820.1022427574003, 00:06:31.435 "io_failed": 1, 00:06:31.435 "io_timeout": 0, 00:06:31.435 "avg_latency_us": 93.86502791959498, 00:06:31.435 "min_latency_us": 33.08307692307692, 00:06:31.435 "max_latency_us": 1840.0492307692307 00:06:31.435 } 00:06:31.435 ], 00:06:31.435 "core_count": 1 00:06:31.435 } 00:06:31.435 09:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.435 09:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 60233 00:06:31.435 09:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 60233 ']' 00:06:31.435 09:41:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 60233 00:06:31.435 09:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:06:31.435 09:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:31.435 09:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60233 00:06:31.435 09:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:31.435 09:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:31.435 killing process with pid 60233 00:06:31.435 09:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60233' 00:06:31.435 09:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 60233 00:06:31.435 09:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 60233 00:06:31.435 [2024-10-30 09:41:09.824675] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:31.435 [2024-10-30 09:41:09.913769] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:32.380 09:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.myKhlIuW3Q 00:06:32.380 09:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:32.380 09:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:32.380 09:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:06:32.380 09:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:06:32.380 09:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:32.380 09:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:32.380 09:41:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:06:32.380 00:06:32.380 real 0m3.545s 00:06:32.380 user 0m4.290s 00:06:32.380 sys 0m0.351s 00:06:32.380 09:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:32.380 ************************************ 00:06:32.380 END TEST raid_read_error_test 00:06:32.380 09:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.380 ************************************ 00:06:32.380 09:41:10 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:06:32.380 09:41:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:32.380 09:41:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:32.380 09:41:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:32.380 ************************************ 00:06:32.380 START TEST raid_write_error_test 00:06:32.380 ************************************ 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 write 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:32.380 09:41:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BBgPgIdjB5 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=60368 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 60368 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:32.380 09:41:10 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 60368 ']' 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:32.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:32.380 09:41:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.380 [2024-10-30 09:41:10.778605] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:06:32.380 [2024-10-30 09:41:10.778730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60368 ] 00:06:32.380 [2024-10-30 09:41:10.934150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.640 [2024-10-30 09:41:11.036813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.640 [2024-10-30 09:41:11.184138] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:32.640 [2024-10-30 09:41:11.184202] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.209 BaseBdev1_malloc 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.209 true 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.209 [2024-10-30 09:41:11.681321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:33.209 [2024-10-30 09:41:11.681371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:33.209 [2024-10-30 09:41:11.681390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:33.209 [2024-10-30 09:41:11.681401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:33.209 [2024-10-30 09:41:11.683531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:33.209 [2024-10-30 09:41:11.683567] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:33.209 BaseBdev1 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.209 BaseBdev2_malloc 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.209 true 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.209 [2024-10-30 09:41:11.725429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:33.209 [2024-10-30 09:41:11.725473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:33.209 [2024-10-30 09:41:11.725489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:33.209 
[2024-10-30 09:41:11.725499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:33.209 [2024-10-30 09:41:11.727901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:33.209 [2024-10-30 09:41:11.727950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:33.209 BaseBdev2 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.209 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.209 [2024-10-30 09:41:11.733506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:33.210 [2024-10-30 09:41:11.735370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:33.210 [2024-10-30 09:41:11.735550] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:33.210 [2024-10-30 09:41:11.735566] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:33.210 [2024-10-30 09:41:11.735805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:33.210 [2024-10-30 09:41:11.735960] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:33.210 [2024-10-30 09:41:11.735971] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:33.210 [2024-10-30 09:41:11.736136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:33.210 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.210 
09:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:33.210 09:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:33.210 09:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:33.210 09:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:33.210 09:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:33.210 09:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:33.210 09:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:33.210 09:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:33.210 09:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:33.210 09:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:33.210 09:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:33.210 09:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:33.210 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.210 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.210 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.210 09:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:33.210 "name": "raid_bdev1", 00:06:33.210 "uuid": "0f1efb14-9eff-4b41-b2f5-4e14893eca66", 00:06:33.210 "strip_size_kb": 64, 00:06:33.210 "state": "online", 00:06:33.210 "raid_level": "raid0", 00:06:33.210 "superblock": true, 
00:06:33.210 "num_base_bdevs": 2, 00:06:33.210 "num_base_bdevs_discovered": 2, 00:06:33.210 "num_base_bdevs_operational": 2, 00:06:33.210 "base_bdevs_list": [ 00:06:33.210 { 00:06:33.210 "name": "BaseBdev1", 00:06:33.210 "uuid": "5ea31b5b-9be8-5106-88b9-fe126e3e9d70", 00:06:33.210 "is_configured": true, 00:06:33.210 "data_offset": 2048, 00:06:33.210 "data_size": 63488 00:06:33.210 }, 00:06:33.210 { 00:06:33.210 "name": "BaseBdev2", 00:06:33.210 "uuid": "fbe04a8b-b2fd-5670-874f-1c293386ab66", 00:06:33.210 "is_configured": true, 00:06:33.210 "data_offset": 2048, 00:06:33.210 "data_size": 63488 00:06:33.210 } 00:06:33.210 ] 00:06:33.210 }' 00:06:33.210 09:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:33.210 09:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.469 09:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:33.469 09:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:33.729 [2024-10-30 09:41:12.138535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:34.680 09:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:06:34.680 09:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.680 09:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.680 09:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.680 09:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:34.680 09:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:06:34.680 09:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:06:34.680 09:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:34.680 09:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:34.680 09:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:34.680 09:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:34.680 09:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:34.680 09:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:34.680 09:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:34.680 09:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:34.680 09:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:34.680 09:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:34.680 09:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:34.680 09:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:34.680 09:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.680 09:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.680 09:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.680 09:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:34.680 "name": "raid_bdev1", 00:06:34.680 "uuid": "0f1efb14-9eff-4b41-b2f5-4e14893eca66", 00:06:34.680 "strip_size_kb": 64, 00:06:34.680 "state": "online", 00:06:34.680 "raid_level": "raid0", 
00:06:34.680 "superblock": true, 00:06:34.680 "num_base_bdevs": 2, 00:06:34.680 "num_base_bdevs_discovered": 2, 00:06:34.680 "num_base_bdevs_operational": 2, 00:06:34.680 "base_bdevs_list": [ 00:06:34.680 { 00:06:34.680 "name": "BaseBdev1", 00:06:34.680 "uuid": "5ea31b5b-9be8-5106-88b9-fe126e3e9d70", 00:06:34.680 "is_configured": true, 00:06:34.680 "data_offset": 2048, 00:06:34.680 "data_size": 63488 00:06:34.680 }, 00:06:34.680 { 00:06:34.680 "name": "BaseBdev2", 00:06:34.680 "uuid": "fbe04a8b-b2fd-5670-874f-1c293386ab66", 00:06:34.680 "is_configured": true, 00:06:34.680 "data_offset": 2048, 00:06:34.680 "data_size": 63488 00:06:34.680 } 00:06:34.680 ] 00:06:34.680 }' 00:06:34.680 09:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:34.680 09:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.942 09:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:34.942 09:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.942 09:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.942 [2024-10-30 09:41:13.367989] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:34.942 [2024-10-30 09:41:13.368027] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:34.942 [2024-10-30 09:41:13.371101] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:34.942 [2024-10-30 09:41:13.371145] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:34.942 [2024-10-30 09:41:13.371177] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:34.942 [2024-10-30 09:41:13.371188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:34.942 { 
00:06:34.942 "results": [ 00:06:34.942 { 00:06:34.942 "job": "raid_bdev1", 00:06:34.942 "core_mask": "0x1", 00:06:34.942 "workload": "randrw", 00:06:34.942 "percentage": 50, 00:06:34.942 "status": "finished", 00:06:34.942 "queue_depth": 1, 00:06:34.942 "io_size": 131072, 00:06:34.942 "runtime": 1.227613, 00:06:34.942 "iops": 14530.63791276241, 00:06:34.942 "mibps": 1816.3297390953012, 00:06:34.942 "io_failed": 1, 00:06:34.942 "io_timeout": 0, 00:06:34.942 "avg_latency_us": 94.06519406486221, 00:06:34.942 "min_latency_us": 33.28, 00:06:34.942 "max_latency_us": 1739.2246153846154 00:06:34.942 } 00:06:34.942 ], 00:06:34.942 "core_count": 1 00:06:34.942 } 00:06:34.942 09:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.942 09:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 60368 00:06:34.942 09:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 60368 ']' 00:06:34.942 09:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 60368 00:06:34.942 09:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:06:34.942 09:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:34.942 09:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60368 00:06:34.942 09:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:34.942 09:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:34.942 killing process with pid 60368 00:06:34.942 09:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60368' 00:06:34.942 09:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 60368 00:06:34.942 [2024-10-30 09:41:13.398964] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:34.942 09:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 60368 00:06:34.942 [2024-10-30 09:41:13.485263] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:35.885 09:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BBgPgIdjB5 00:06:35.885 09:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:35.885 09:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:35.885 09:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:06:35.885 09:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:06:35.886 09:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:35.886 09:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:35.886 09:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:06:35.886 00:06:35.886 real 0m3.537s 00:06:35.886 user 0m4.227s 00:06:35.886 sys 0m0.372s 00:06:35.886 09:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:35.886 ************************************ 00:06:35.886 END TEST raid_write_error_test 00:06:35.886 ************************************ 00:06:35.886 09:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.886 09:41:14 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:35.886 09:41:14 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:06:35.886 09:41:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:35.886 09:41:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:35.886 09:41:14 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:06:35.886 ************************************ 00:06:35.886 START TEST raid_state_function_test 00:06:35.886 ************************************ 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 false 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:35.886 09:41:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:35.886 Process raid pid: 60500 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60500 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60500' 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60500 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 60500 ']' 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.886 09:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:35.886 [2024-10-30 09:41:14.381327] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:06:35.886 [2024-10-30 09:41:14.381446] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.148 [2024-10-30 09:41:14.545512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.148 [2024-10-30 09:41:14.661715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.409 [2024-10-30 09:41:14.799748] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:36.409 [2024-10-30 09:41:14.799787] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:36.669 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:36.669 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:06:36.669 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:36.669 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.669 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.669 [2024-10-30 09:41:15.224794] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:36.669 [2024-10-30 09:41:15.224841] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:36.669 [2024-10-30 09:41:15.224851] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:36.669 [2024-10-30 09:41:15.224862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:36.669 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.669 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:36.669 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:36.669 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:36.669 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:36.669 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:36.669 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:36.669 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:36.670 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:36.670 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:36.670 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:36.670 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:36.670 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:36.670 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.670 
09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.670 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.670 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:36.670 "name": "Existed_Raid", 00:06:36.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:36.670 "strip_size_kb": 64, 00:06:36.670 "state": "configuring", 00:06:36.670 "raid_level": "concat", 00:06:36.670 "superblock": false, 00:06:36.670 "num_base_bdevs": 2, 00:06:36.670 "num_base_bdevs_discovered": 0, 00:06:36.670 "num_base_bdevs_operational": 2, 00:06:36.670 "base_bdevs_list": [ 00:06:36.670 { 00:06:36.670 "name": "BaseBdev1", 00:06:36.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:36.670 "is_configured": false, 00:06:36.670 "data_offset": 0, 00:06:36.670 "data_size": 0 00:06:36.670 }, 00:06:36.670 { 00:06:36.670 "name": "BaseBdev2", 00:06:36.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:36.670 "is_configured": false, 00:06:36.670 "data_offset": 0, 00:06:36.670 "data_size": 0 00:06:36.670 } 00:06:36.670 ] 00:06:36.670 }' 00:06:36.670 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:36.670 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.337 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:37.337 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.337 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.337 [2024-10-30 09:41:15.564842] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:37.337 [2024-10-30 09:41:15.564879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:06:37.337 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.337 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:37.337 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.337 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.337 [2024-10-30 09:41:15.572841] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:37.337 [2024-10-30 09:41:15.572878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:37.337 [2024-10-30 09:41:15.572888] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:37.337 [2024-10-30 09:41:15.572900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:37.337 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.337 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:37.337 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.337 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.337 [2024-10-30 09:41:15.605490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:37.337 BaseBdev1 00:06:37.337 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.337 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:37.337 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:06:37.337 09:41:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:06:37.337 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:06:37.337 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:06:37.337 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:06:37.337 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:06:37.337 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.337 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.337 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.337 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:37.337 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.337 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.337 [ 00:06:37.337 { 00:06:37.337 "name": "BaseBdev1", 00:06:37.337 "aliases": [ 00:06:37.337 "e9d34344-e85f-4769-b6e9-d94a37fa4c1e" 00:06:37.337 ], 00:06:37.337 "product_name": "Malloc disk", 00:06:37.337 "block_size": 512, 00:06:37.337 "num_blocks": 65536, 00:06:37.337 "uuid": "e9d34344-e85f-4769-b6e9-d94a37fa4c1e", 00:06:37.337 "assigned_rate_limits": { 00:06:37.337 "rw_ios_per_sec": 0, 00:06:37.337 "rw_mbytes_per_sec": 0, 00:06:37.338 "r_mbytes_per_sec": 0, 00:06:37.338 "w_mbytes_per_sec": 0 00:06:37.338 }, 00:06:37.338 "claimed": true, 00:06:37.338 "claim_type": "exclusive_write", 00:06:37.338 "zoned": false, 00:06:37.338 "supported_io_types": { 00:06:37.338 "read": true, 00:06:37.338 "write": true, 00:06:37.338 "unmap": true, 00:06:37.338 "flush": true, 
00:06:37.338 "reset": true, 00:06:37.338 "nvme_admin": false, 00:06:37.338 "nvme_io": false, 00:06:37.338 "nvme_io_md": false, 00:06:37.338 "write_zeroes": true, 00:06:37.338 "zcopy": true, 00:06:37.338 "get_zone_info": false, 00:06:37.338 "zone_management": false, 00:06:37.338 "zone_append": false, 00:06:37.338 "compare": false, 00:06:37.338 "compare_and_write": false, 00:06:37.338 "abort": true, 00:06:37.338 "seek_hole": false, 00:06:37.338 "seek_data": false, 00:06:37.338 "copy": true, 00:06:37.338 "nvme_iov_md": false 00:06:37.338 }, 00:06:37.338 "memory_domains": [ 00:06:37.338 { 00:06:37.338 "dma_device_id": "system", 00:06:37.338 "dma_device_type": 1 00:06:37.338 }, 00:06:37.338 { 00:06:37.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.338 "dma_device_type": 2 00:06:37.338 } 00:06:37.338 ], 00:06:37.338 "driver_specific": {} 00:06:37.338 } 00:06:37.338 ] 00:06:37.338 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.338 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:06:37.338 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:37.338 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:37.338 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:37.338 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:37.338 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:37.338 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:37.338 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:37.338 09:41:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:37.338 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:37.338 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:37.338 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:37.338 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.338 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.338 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:37.338 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.338 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:37.338 "name": "Existed_Raid", 00:06:37.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:37.338 "strip_size_kb": 64, 00:06:37.338 "state": "configuring", 00:06:37.338 "raid_level": "concat", 00:06:37.338 "superblock": false, 00:06:37.338 "num_base_bdevs": 2, 00:06:37.338 "num_base_bdevs_discovered": 1, 00:06:37.338 "num_base_bdevs_operational": 2, 00:06:37.338 "base_bdevs_list": [ 00:06:37.338 { 00:06:37.338 "name": "BaseBdev1", 00:06:37.338 "uuid": "e9d34344-e85f-4769-b6e9-d94a37fa4c1e", 00:06:37.338 "is_configured": true, 00:06:37.338 "data_offset": 0, 00:06:37.338 "data_size": 65536 00:06:37.338 }, 00:06:37.338 { 00:06:37.338 "name": "BaseBdev2", 00:06:37.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:37.338 "is_configured": false, 00:06:37.338 "data_offset": 0, 00:06:37.338 "data_size": 0 00:06:37.338 } 00:06:37.338 ] 00:06:37.338 }' 00:06:37.338 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:37.338 09:41:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:06:37.598 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:37.598 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.598 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.598 [2024-10-30 09:41:15.957625] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:37.598 [2024-10-30 09:41:15.957675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:37.598 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.598 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:37.598 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.598 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.598 [2024-10-30 09:41:15.965680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:37.598 [2024-10-30 09:41:15.967606] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:37.598 [2024-10-30 09:41:15.967645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:37.598 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.598 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:37.598 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:37.598 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 
2 00:06:37.598 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:37.598 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:37.599 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:37.599 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:37.599 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:37.599 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:37.599 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:37.599 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:37.599 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:37.599 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:37.599 09:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:37.599 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.599 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.599 09:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.599 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:37.599 "name": "Existed_Raid", 00:06:37.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:37.599 "strip_size_kb": 64, 00:06:37.599 "state": "configuring", 00:06:37.599 "raid_level": "concat", 00:06:37.599 "superblock": false, 00:06:37.599 "num_base_bdevs": 2, 00:06:37.599 
"num_base_bdevs_discovered": 1, 00:06:37.599 "num_base_bdevs_operational": 2, 00:06:37.599 "base_bdevs_list": [ 00:06:37.599 { 00:06:37.599 "name": "BaseBdev1", 00:06:37.599 "uuid": "e9d34344-e85f-4769-b6e9-d94a37fa4c1e", 00:06:37.599 "is_configured": true, 00:06:37.599 "data_offset": 0, 00:06:37.599 "data_size": 65536 00:06:37.599 }, 00:06:37.599 { 00:06:37.599 "name": "BaseBdev2", 00:06:37.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:37.599 "is_configured": false, 00:06:37.599 "data_offset": 0, 00:06:37.599 "data_size": 0 00:06:37.599 } 00:06:37.599 ] 00:06:37.599 }' 00:06:37.599 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:37.599 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.858 [2024-10-30 09:41:16.296392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:37.858 [2024-10-30 09:41:16.296433] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:37.858 [2024-10-30 09:41:16.296441] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:37.858 [2024-10-30 09:41:16.296701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:37.858 [2024-10-30 09:41:16.296838] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:37.858 [2024-10-30 09:41:16.296851] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:37.858 [2024-10-30 09:41:16.297087] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:37.858 BaseBdev2 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.858 [ 00:06:37.858 { 00:06:37.858 "name": "BaseBdev2", 00:06:37.858 "aliases": [ 00:06:37.858 "d509fb54-4498-4a8a-b947-9d5502533b9f" 00:06:37.858 ], 00:06:37.858 "product_name": "Malloc disk", 00:06:37.858 "block_size": 512, 00:06:37.858 "num_blocks": 65536, 00:06:37.858 "uuid": "d509fb54-4498-4a8a-b947-9d5502533b9f", 00:06:37.858 
"assigned_rate_limits": { 00:06:37.858 "rw_ios_per_sec": 0, 00:06:37.858 "rw_mbytes_per_sec": 0, 00:06:37.858 "r_mbytes_per_sec": 0, 00:06:37.858 "w_mbytes_per_sec": 0 00:06:37.858 }, 00:06:37.858 "claimed": true, 00:06:37.858 "claim_type": "exclusive_write", 00:06:37.858 "zoned": false, 00:06:37.858 "supported_io_types": { 00:06:37.858 "read": true, 00:06:37.858 "write": true, 00:06:37.858 "unmap": true, 00:06:37.858 "flush": true, 00:06:37.858 "reset": true, 00:06:37.858 "nvme_admin": false, 00:06:37.858 "nvme_io": false, 00:06:37.858 "nvme_io_md": false, 00:06:37.858 "write_zeroes": true, 00:06:37.858 "zcopy": true, 00:06:37.858 "get_zone_info": false, 00:06:37.858 "zone_management": false, 00:06:37.858 "zone_append": false, 00:06:37.858 "compare": false, 00:06:37.858 "compare_and_write": false, 00:06:37.858 "abort": true, 00:06:37.858 "seek_hole": false, 00:06:37.858 "seek_data": false, 00:06:37.858 "copy": true, 00:06:37.858 "nvme_iov_md": false 00:06:37.858 }, 00:06:37.858 "memory_domains": [ 00:06:37.858 { 00:06:37.858 "dma_device_id": "system", 00:06:37.858 "dma_device_type": 1 00:06:37.858 }, 00:06:37.858 { 00:06:37.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.858 "dma_device_type": 2 00:06:37.858 } 00:06:37.858 ], 00:06:37.858 "driver_specific": {} 00:06:37.858 } 00:06:37.858 ] 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:37.858 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:37.859 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:37.859 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:37.859 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:37.859 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:37.859 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:37.859 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.859 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.859 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.859 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:37.859 "name": "Existed_Raid", 00:06:37.859 "uuid": "b4e4f28a-948f-4a91-8557-af73e0e1c6a9", 00:06:37.859 "strip_size_kb": 64, 00:06:37.859 "state": "online", 00:06:37.859 "raid_level": "concat", 00:06:37.859 "superblock": false, 00:06:37.859 "num_base_bdevs": 2, 00:06:37.859 "num_base_bdevs_discovered": 2, 00:06:37.859 "num_base_bdevs_operational": 2, 00:06:37.859 "base_bdevs_list": [ 00:06:37.859 { 
00:06:37.859 "name": "BaseBdev1", 00:06:37.859 "uuid": "e9d34344-e85f-4769-b6e9-d94a37fa4c1e", 00:06:37.859 "is_configured": true, 00:06:37.859 "data_offset": 0, 00:06:37.859 "data_size": 65536 00:06:37.859 }, 00:06:37.859 { 00:06:37.859 "name": "BaseBdev2", 00:06:37.859 "uuid": "d509fb54-4498-4a8a-b947-9d5502533b9f", 00:06:37.859 "is_configured": true, 00:06:37.859 "data_offset": 0, 00:06:37.859 "data_size": 65536 00:06:37.859 } 00:06:37.859 ] 00:06:37.859 }' 00:06:37.859 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:37.859 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.119 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:38.119 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:38.119 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:38.119 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:38.119 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:38.119 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:38.119 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:38.119 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:38.119 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.119 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.119 [2024-10-30 09:41:16.656818] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:38.119 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:06:38.120 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:38.120 "name": "Existed_Raid", 00:06:38.120 "aliases": [ 00:06:38.120 "b4e4f28a-948f-4a91-8557-af73e0e1c6a9" 00:06:38.120 ], 00:06:38.120 "product_name": "Raid Volume", 00:06:38.120 "block_size": 512, 00:06:38.120 "num_blocks": 131072, 00:06:38.120 "uuid": "b4e4f28a-948f-4a91-8557-af73e0e1c6a9", 00:06:38.120 "assigned_rate_limits": { 00:06:38.120 "rw_ios_per_sec": 0, 00:06:38.120 "rw_mbytes_per_sec": 0, 00:06:38.120 "r_mbytes_per_sec": 0, 00:06:38.120 "w_mbytes_per_sec": 0 00:06:38.120 }, 00:06:38.120 "claimed": false, 00:06:38.120 "zoned": false, 00:06:38.120 "supported_io_types": { 00:06:38.120 "read": true, 00:06:38.120 "write": true, 00:06:38.120 "unmap": true, 00:06:38.120 "flush": true, 00:06:38.120 "reset": true, 00:06:38.120 "nvme_admin": false, 00:06:38.120 "nvme_io": false, 00:06:38.120 "nvme_io_md": false, 00:06:38.120 "write_zeroes": true, 00:06:38.120 "zcopy": false, 00:06:38.120 "get_zone_info": false, 00:06:38.120 "zone_management": false, 00:06:38.120 "zone_append": false, 00:06:38.120 "compare": false, 00:06:38.120 "compare_and_write": false, 00:06:38.120 "abort": false, 00:06:38.120 "seek_hole": false, 00:06:38.120 "seek_data": false, 00:06:38.120 "copy": false, 00:06:38.120 "nvme_iov_md": false 00:06:38.120 }, 00:06:38.120 "memory_domains": [ 00:06:38.120 { 00:06:38.120 "dma_device_id": "system", 00:06:38.120 "dma_device_type": 1 00:06:38.120 }, 00:06:38.120 { 00:06:38.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:38.120 "dma_device_type": 2 00:06:38.120 }, 00:06:38.120 { 00:06:38.120 "dma_device_id": "system", 00:06:38.120 "dma_device_type": 1 00:06:38.120 }, 00:06:38.120 { 00:06:38.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:38.120 "dma_device_type": 2 00:06:38.120 } 00:06:38.120 ], 00:06:38.120 "driver_specific": { 00:06:38.120 "raid": { 00:06:38.120 "uuid": "b4e4f28a-948f-4a91-8557-af73e0e1c6a9", 
00:06:38.120 "strip_size_kb": 64, 00:06:38.120 "state": "online", 00:06:38.120 "raid_level": "concat", 00:06:38.120 "superblock": false, 00:06:38.120 "num_base_bdevs": 2, 00:06:38.120 "num_base_bdevs_discovered": 2, 00:06:38.120 "num_base_bdevs_operational": 2, 00:06:38.120 "base_bdevs_list": [ 00:06:38.120 { 00:06:38.120 "name": "BaseBdev1", 00:06:38.120 "uuid": "e9d34344-e85f-4769-b6e9-d94a37fa4c1e", 00:06:38.120 "is_configured": true, 00:06:38.120 "data_offset": 0, 00:06:38.120 "data_size": 65536 00:06:38.120 }, 00:06:38.120 { 00:06:38.120 "name": "BaseBdev2", 00:06:38.120 "uuid": "d509fb54-4498-4a8a-b947-9d5502533b9f", 00:06:38.120 "is_configured": true, 00:06:38.120 "data_offset": 0, 00:06:38.120 "data_size": 65536 00:06:38.120 } 00:06:38.120 ] 00:06:38.120 } 00:06:38.120 } 00:06:38.120 }' 00:06:38.120 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:38.120 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:38.120 BaseBdev2' 00:06:38.120 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.380 [2024-10-30 09:41:16.804589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:38.380 [2024-10-30 09:41:16.804619] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:38.380 [2024-10-30 09:41:16.804668] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:38.380 09:41:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:38.380 "name": "Existed_Raid", 00:06:38.380 "uuid": "b4e4f28a-948f-4a91-8557-af73e0e1c6a9", 00:06:38.380 "strip_size_kb": 64, 00:06:38.380 "state": "offline", 00:06:38.380 "raid_level": "concat", 00:06:38.380 "superblock": false, 00:06:38.380 "num_base_bdevs": 2, 00:06:38.380 "num_base_bdevs_discovered": 1, 00:06:38.380 "num_base_bdevs_operational": 1, 00:06:38.380 "base_bdevs_list": [ 00:06:38.380 { 00:06:38.380 "name": null, 00:06:38.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:38.380 "is_configured": false, 00:06:38.380 "data_offset": 0, 00:06:38.380 "data_size": 65536 00:06:38.380 }, 00:06:38.380 { 00:06:38.380 "name": "BaseBdev2", 00:06:38.380 "uuid": "d509fb54-4498-4a8a-b947-9d5502533b9f", 00:06:38.380 "is_configured": true, 00:06:38.380 "data_offset": 0, 00:06:38.380 "data_size": 65536 00:06:38.380 } 00:06:38.380 ] 00:06:38.380 }' 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:38.380 09:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.641 09:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:38.641 09:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:38.641 09:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:38.641 09:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.641 09:41:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.641 09:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:38.641 09:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.641 09:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:38.641 09:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:38.641 09:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:38.641 09:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.641 09:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.641 [2024-10-30 09:41:17.207991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:38.641 [2024-10-30 09:41:17.208173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:38.929 09:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.929 09:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:38.929 09:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:38.929 09:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:38.929 09:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:38.929 09:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.929 09:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.929 09:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:06:38.929 09:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:38.929 09:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:38.929 09:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:38.929 09:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60500 00:06:38.929 09:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 60500 ']' 00:06:38.929 09:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 60500 00:06:38.929 09:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:06:38.929 09:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:38.929 09:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60500 00:06:38.929 killing process with pid 60500 00:06:38.929 09:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:38.929 09:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:38.929 09:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60500' 00:06:38.929 09:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 60500 00:06:38.929 [2024-10-30 09:41:17.333259] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:38.929 09:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 60500 00:06:38.929 [2024-10-30 09:41:17.343686] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:39.500 ************************************ 00:06:39.500 END TEST raid_state_function_test 00:06:39.500 ************************************ 00:06:39.500 09:41:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:39.500 00:06:39.500 real 0m3.739s 00:06:39.500 user 0m5.391s 00:06:39.500 sys 0m0.550s 00:06:39.500 09:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:39.500 09:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.500 09:41:18 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:06:39.500 09:41:18 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:39.500 09:41:18 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:39.500 09:41:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:39.762 ************************************ 00:06:39.762 START TEST raid_state_function_test_sb 00:06:39.762 ************************************ 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 true 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:39.762 Process raid pid: 60737 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60737 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60737' 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # 
waitforlisten 60737 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 60737 ']' 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:39.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.762 09:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:39.762 [2024-10-30 09:41:18.189502] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:06:39.762 [2024-10-30 09:41:18.189749] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:39.762 [2024-10-30 09:41:18.352106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.023 [2024-10-30 09:41:18.455228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.023 [2024-10-30 09:41:18.593585] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:40.023 [2024-10-30 09:41:18.593615] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:40.596 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:40.596 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:06:40.596 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:40.596 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.596 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.596 [2024-10-30 09:41:19.086802] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:40.596 [2024-10-30 09:41:19.086853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:40.596 [2024-10-30 09:41:19.086863] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:40.596 [2024-10-30 09:41:19.086874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:40.596 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:40.596 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:40.596 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:40.596 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:40.596 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:40.596 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:40.596 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:40.596 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:40.596 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:40.596 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:40.596 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:40.596 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:40.596 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.596 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:40.596 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.596 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.596 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:40.596 "name": "Existed_Raid", 00:06:40.596 "uuid": "7b85ab1f-06a7-414c-8db4-823a41ed4cba", 00:06:40.596 
"strip_size_kb": 64, 00:06:40.596 "state": "configuring", 00:06:40.596 "raid_level": "concat", 00:06:40.596 "superblock": true, 00:06:40.596 "num_base_bdevs": 2, 00:06:40.596 "num_base_bdevs_discovered": 0, 00:06:40.596 "num_base_bdevs_operational": 2, 00:06:40.596 "base_bdevs_list": [ 00:06:40.596 { 00:06:40.596 "name": "BaseBdev1", 00:06:40.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:40.596 "is_configured": false, 00:06:40.596 "data_offset": 0, 00:06:40.596 "data_size": 0 00:06:40.596 }, 00:06:40.596 { 00:06:40.596 "name": "BaseBdev2", 00:06:40.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:40.596 "is_configured": false, 00:06:40.596 "data_offset": 0, 00:06:40.596 "data_size": 0 00:06:40.596 } 00:06:40.596 ] 00:06:40.596 }' 00:06:40.596 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:40.596 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.857 [2024-10-30 09:41:19.398819] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:40.857 [2024-10-30 09:41:19.398850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.857 [2024-10-30 09:41:19.406825] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:40.857 [2024-10-30 09:41:19.406862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:40.857 [2024-10-30 09:41:19.406870] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:40.857 [2024-10-30 09:41:19.406882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.857 [2024-10-30 09:41:19.439265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:40.857 BaseBdev1 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.857 [ 00:06:40.857 { 00:06:40.857 "name": "BaseBdev1", 00:06:40.857 "aliases": [ 00:06:40.857 "f5648def-ed51-4103-8071-3306afdc923e" 00:06:40.857 ], 00:06:40.857 "product_name": "Malloc disk", 00:06:40.857 "block_size": 512, 00:06:40.857 "num_blocks": 65536, 00:06:40.857 "uuid": "f5648def-ed51-4103-8071-3306afdc923e", 00:06:40.857 "assigned_rate_limits": { 00:06:40.857 "rw_ios_per_sec": 0, 00:06:40.857 "rw_mbytes_per_sec": 0, 00:06:40.857 "r_mbytes_per_sec": 0, 00:06:40.857 "w_mbytes_per_sec": 0 00:06:40.857 }, 00:06:40.857 "claimed": true, 00:06:40.857 "claim_type": "exclusive_write", 00:06:40.857 "zoned": false, 00:06:40.857 "supported_io_types": { 00:06:40.857 "read": true, 00:06:40.857 "write": true, 00:06:40.857 "unmap": true, 00:06:40.857 "flush": true, 00:06:40.857 "reset": true, 00:06:40.857 "nvme_admin": false, 00:06:40.857 "nvme_io": false, 00:06:40.857 "nvme_io_md": false, 00:06:40.857 "write_zeroes": true, 00:06:40.857 "zcopy": true, 00:06:40.857 "get_zone_info": false, 00:06:40.857 "zone_management": false, 00:06:40.857 "zone_append": false, 00:06:40.857 "compare": false, 00:06:40.857 
"compare_and_write": false, 00:06:40.857 "abort": true, 00:06:40.857 "seek_hole": false, 00:06:40.857 "seek_data": false, 00:06:40.857 "copy": true, 00:06:40.857 "nvme_iov_md": false 00:06:40.857 }, 00:06:40.857 "memory_domains": [ 00:06:40.857 { 00:06:40.857 "dma_device_id": "system", 00:06:40.857 "dma_device_type": 1 00:06:40.857 }, 00:06:40.857 { 00:06:40.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.857 "dma_device_type": 2 00:06:40.857 } 00:06:40.857 ], 00:06:40.857 "driver_specific": {} 00:06:40.857 } 00:06:40.857 ] 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:40.857 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:40.858 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:40.858 09:41:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:40.858 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.858 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.858 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:41.120 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.120 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:41.120 "name": "Existed_Raid", 00:06:41.121 "uuid": "6579bf3c-3cf8-48b0-a0d0-efbf58a2f377", 00:06:41.121 "strip_size_kb": 64, 00:06:41.121 "state": "configuring", 00:06:41.121 "raid_level": "concat", 00:06:41.121 "superblock": true, 00:06:41.121 "num_base_bdevs": 2, 00:06:41.121 "num_base_bdevs_discovered": 1, 00:06:41.121 "num_base_bdevs_operational": 2, 00:06:41.121 "base_bdevs_list": [ 00:06:41.121 { 00:06:41.121 "name": "BaseBdev1", 00:06:41.121 "uuid": "f5648def-ed51-4103-8071-3306afdc923e", 00:06:41.121 "is_configured": true, 00:06:41.121 "data_offset": 2048, 00:06:41.121 "data_size": 63488 00:06:41.121 }, 00:06:41.121 { 00:06:41.121 "name": "BaseBdev2", 00:06:41.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:41.121 "is_configured": false, 00:06:41.121 "data_offset": 0, 00:06:41.121 "data_size": 0 00:06:41.121 } 00:06:41.121 ] 00:06:41.121 }' 00:06:41.121 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:41.121 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.383 [2024-10-30 09:41:19.779379] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:41.383 [2024-10-30 09:41:19.779523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.383 [2024-10-30 09:41:19.787443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:41.383 [2024-10-30 09:41:19.789303] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:41.383 [2024-10-30 09:41:19.789336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:41.383 "name": "Existed_Raid", 00:06:41.383 "uuid": "9b45d671-459d-40fa-b7f1-8c8d4d98d57b", 00:06:41.383 "strip_size_kb": 64, 00:06:41.383 "state": "configuring", 00:06:41.383 "raid_level": "concat", 00:06:41.383 "superblock": true, 00:06:41.383 "num_base_bdevs": 2, 00:06:41.383 "num_base_bdevs_discovered": 1, 00:06:41.383 "num_base_bdevs_operational": 2, 00:06:41.383 "base_bdevs_list": [ 00:06:41.383 { 00:06:41.383 "name": "BaseBdev1", 00:06:41.383 "uuid": 
"f5648def-ed51-4103-8071-3306afdc923e", 00:06:41.383 "is_configured": true, 00:06:41.383 "data_offset": 2048, 00:06:41.383 "data_size": 63488 00:06:41.383 }, 00:06:41.383 { 00:06:41.383 "name": "BaseBdev2", 00:06:41.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:41.383 "is_configured": false, 00:06:41.383 "data_offset": 0, 00:06:41.383 "data_size": 0 00:06:41.383 } 00:06:41.383 ] 00:06:41.383 }' 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:41.383 09:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.643 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:41.643 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.644 [2024-10-30 09:41:20.130192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:41.644 [2024-10-30 09:41:20.130390] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:41.644 [2024-10-30 09:41:20.130403] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:41.644 [2024-10-30 09:41:20.130663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:41.644 [2024-10-30 09:41:20.130789] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:41.644 [2024-10-30 09:41:20.130799] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:41.644 BaseBdev2 00:06:41.644 [2024-10-30 09:41:20.130917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.644 [ 00:06:41.644 { 00:06:41.644 "name": "BaseBdev2", 00:06:41.644 "aliases": [ 00:06:41.644 "f7b2bf0d-2b1f-42af-b7f7-c066aa6cc5ce" 00:06:41.644 ], 00:06:41.644 "product_name": "Malloc disk", 00:06:41.644 "block_size": 512, 00:06:41.644 "num_blocks": 65536, 00:06:41.644 "uuid": "f7b2bf0d-2b1f-42af-b7f7-c066aa6cc5ce", 00:06:41.644 "assigned_rate_limits": { 00:06:41.644 "rw_ios_per_sec": 0, 00:06:41.644 "rw_mbytes_per_sec": 0, 00:06:41.644 "r_mbytes_per_sec": 0, 
00:06:41.644 "w_mbytes_per_sec": 0 00:06:41.644 }, 00:06:41.644 "claimed": true, 00:06:41.644 "claim_type": "exclusive_write", 00:06:41.644 "zoned": false, 00:06:41.644 "supported_io_types": { 00:06:41.644 "read": true, 00:06:41.644 "write": true, 00:06:41.644 "unmap": true, 00:06:41.644 "flush": true, 00:06:41.644 "reset": true, 00:06:41.644 "nvme_admin": false, 00:06:41.644 "nvme_io": false, 00:06:41.644 "nvme_io_md": false, 00:06:41.644 "write_zeroes": true, 00:06:41.644 "zcopy": true, 00:06:41.644 "get_zone_info": false, 00:06:41.644 "zone_management": false, 00:06:41.644 "zone_append": false, 00:06:41.644 "compare": false, 00:06:41.644 "compare_and_write": false, 00:06:41.644 "abort": true, 00:06:41.644 "seek_hole": false, 00:06:41.644 "seek_data": false, 00:06:41.644 "copy": true, 00:06:41.644 "nvme_iov_md": false 00:06:41.644 }, 00:06:41.644 "memory_domains": [ 00:06:41.644 { 00:06:41.644 "dma_device_id": "system", 00:06:41.644 "dma_device_type": 1 00:06:41.644 }, 00:06:41.644 { 00:06:41.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.644 "dma_device_type": 2 00:06:41.644 } 00:06:41.644 ], 00:06:41.644 "driver_specific": {} 00:06:41.644 } 00:06:41.644 ] 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:41.644 "name": "Existed_Raid", 00:06:41.644 "uuid": "9b45d671-459d-40fa-b7f1-8c8d4d98d57b", 00:06:41.644 "strip_size_kb": 64, 00:06:41.644 "state": "online", 00:06:41.644 "raid_level": "concat", 00:06:41.644 "superblock": true, 00:06:41.644 "num_base_bdevs": 2, 00:06:41.644 "num_base_bdevs_discovered": 2, 00:06:41.644 "num_base_bdevs_operational": 2, 00:06:41.644 "base_bdevs_list": [ 00:06:41.644 { 00:06:41.644 "name": "BaseBdev1", 00:06:41.644 "uuid": 
"f5648def-ed51-4103-8071-3306afdc923e", 00:06:41.644 "is_configured": true, 00:06:41.644 "data_offset": 2048, 00:06:41.644 "data_size": 63488 00:06:41.644 }, 00:06:41.644 { 00:06:41.644 "name": "BaseBdev2", 00:06:41.644 "uuid": "f7b2bf0d-2b1f-42af-b7f7-c066aa6cc5ce", 00:06:41.644 "is_configured": true, 00:06:41.644 "data_offset": 2048, 00:06:41.644 "data_size": 63488 00:06:41.644 } 00:06:41.644 ] 00:06:41.644 }' 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:41.644 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.904 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:41.904 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:41.904 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:41.904 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:41.904 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:41.904 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:41.904 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:41.904 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.904 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.904 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:41.904 [2024-10-30 09:41:20.482605] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:41.904 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:06:41.904 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:41.904 "name": "Existed_Raid", 00:06:41.904 "aliases": [ 00:06:41.904 "9b45d671-459d-40fa-b7f1-8c8d4d98d57b" 00:06:41.904 ], 00:06:41.904 "product_name": "Raid Volume", 00:06:41.904 "block_size": 512, 00:06:41.904 "num_blocks": 126976, 00:06:41.904 "uuid": "9b45d671-459d-40fa-b7f1-8c8d4d98d57b", 00:06:41.904 "assigned_rate_limits": { 00:06:41.904 "rw_ios_per_sec": 0, 00:06:41.904 "rw_mbytes_per_sec": 0, 00:06:41.904 "r_mbytes_per_sec": 0, 00:06:41.904 "w_mbytes_per_sec": 0 00:06:41.904 }, 00:06:41.904 "claimed": false, 00:06:41.904 "zoned": false, 00:06:41.904 "supported_io_types": { 00:06:41.904 "read": true, 00:06:41.904 "write": true, 00:06:41.904 "unmap": true, 00:06:41.904 "flush": true, 00:06:41.904 "reset": true, 00:06:41.904 "nvme_admin": false, 00:06:41.904 "nvme_io": false, 00:06:41.904 "nvme_io_md": false, 00:06:41.904 "write_zeroes": true, 00:06:41.904 "zcopy": false, 00:06:41.904 "get_zone_info": false, 00:06:41.904 "zone_management": false, 00:06:41.904 "zone_append": false, 00:06:41.904 "compare": false, 00:06:41.904 "compare_and_write": false, 00:06:41.904 "abort": false, 00:06:41.904 "seek_hole": false, 00:06:41.904 "seek_data": false, 00:06:41.904 "copy": false, 00:06:41.904 "nvme_iov_md": false 00:06:41.904 }, 00:06:41.904 "memory_domains": [ 00:06:41.904 { 00:06:41.904 "dma_device_id": "system", 00:06:41.904 "dma_device_type": 1 00:06:41.904 }, 00:06:41.904 { 00:06:41.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.904 "dma_device_type": 2 00:06:41.904 }, 00:06:41.904 { 00:06:41.904 "dma_device_id": "system", 00:06:41.904 "dma_device_type": 1 00:06:41.904 }, 00:06:41.904 { 00:06:41.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.904 "dma_device_type": 2 00:06:41.904 } 00:06:41.904 ], 00:06:41.904 "driver_specific": { 00:06:41.904 "raid": { 00:06:41.904 "uuid": "9b45d671-459d-40fa-b7f1-8c8d4d98d57b", 00:06:41.904 
"strip_size_kb": 64, 00:06:41.904 "state": "online", 00:06:41.904 "raid_level": "concat", 00:06:41.904 "superblock": true, 00:06:41.904 "num_base_bdevs": 2, 00:06:41.904 "num_base_bdevs_discovered": 2, 00:06:41.904 "num_base_bdevs_operational": 2, 00:06:41.904 "base_bdevs_list": [ 00:06:41.904 { 00:06:41.904 "name": "BaseBdev1", 00:06:41.904 "uuid": "f5648def-ed51-4103-8071-3306afdc923e", 00:06:41.904 "is_configured": true, 00:06:41.904 "data_offset": 2048, 00:06:41.904 "data_size": 63488 00:06:41.904 }, 00:06:41.904 { 00:06:41.904 "name": "BaseBdev2", 00:06:41.904 "uuid": "f7b2bf0d-2b1f-42af-b7f7-c066aa6cc5ce", 00:06:41.904 "is_configured": true, 00:06:41.904 "data_offset": 2048, 00:06:41.904 "data_size": 63488 00:06:41.904 } 00:06:41.904 ] 00:06:41.904 } 00:06:41.904 } 00:06:41.904 }' 00:06:41.904 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:42.166 BaseBdev2' 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:42.166 [2024-10-30 09:41:20.646390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:42.166 [2024-10-30 09:41:20.646419] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:42.166 [2024-10-30 09:41:20.646466] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:42.166 
09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.166 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:42.166 "name": "Existed_Raid", 00:06:42.166 "uuid": "9b45d671-459d-40fa-b7f1-8c8d4d98d57b", 00:06:42.166 "strip_size_kb": 64, 00:06:42.166 "state": "offline", 00:06:42.166 "raid_level": "concat", 00:06:42.166 "superblock": true, 00:06:42.166 "num_base_bdevs": 2, 00:06:42.166 "num_base_bdevs_discovered": 1, 00:06:42.166 "num_base_bdevs_operational": 1, 00:06:42.166 "base_bdevs_list": [ 00:06:42.166 { 00:06:42.166 "name": null, 00:06:42.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:42.166 "is_configured": false, 00:06:42.166 "data_offset": 0, 00:06:42.167 "data_size": 63488 00:06:42.167 }, 00:06:42.167 { 00:06:42.167 "name": "BaseBdev2", 00:06:42.167 "uuid": "f7b2bf0d-2b1f-42af-b7f7-c066aa6cc5ce", 00:06:42.167 "is_configured": true, 00:06:42.167 "data_offset": 2048, 00:06:42.167 "data_size": 63488 00:06:42.167 } 00:06:42.167 ] 00:06:42.167 }' 00:06:42.167 09:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:42.167 09:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:42.429 09:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:42.429 09:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:42.429 09:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:42.429 09:41:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:42.429 09:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.429 09:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:42.691 [2024-10-30 09:41:21.068954] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:42.691 [2024-10-30 09:41:21.068999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60737 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 60737 ']' 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 60737 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60737 00:06:42.691 killing process with pid 60737 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60737' 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 60737 00:06:42.691 [2024-10-30 09:41:21.192739] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:42.691 09:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 60737 00:06:42.691 [2024-10-30 09:41:21.203134] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:06:43.654 09:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:06:43.654 00:06:43.654 real 0m3.796s 00:06:43.654 user 0m5.530s 00:06:43.654 sys 0m0.541s 00:06:43.654 09:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:43.654 ************************************ 00:06:43.654 END TEST raid_state_function_test_sb 00:06:43.654 ************************************ 00:06:43.654 09:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:43.654 09:41:21 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:06:43.654 09:41:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:43.654 09:41:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:43.654 09:41:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:43.654 ************************************ 00:06:43.654 START TEST raid_superblock_test 00:06:43.654 ************************************ 00:06:43.654 09:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 2 00:06:43.654 09:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:06:43.654 09:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:06:43.654 09:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:06:43.654 09:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:06:43.654 09:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:06:43.654 09:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:06:43.654 09:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:06:43.654 09:41:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:06:43.654 09:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:06:43.654 09:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:06:43.654 09:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:06:43.654 09:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:06:43.654 09:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:06:43.654 09:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:06:43.654 09:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:06:43.654 09:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:06:43.654 09:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=60978 00:06:43.654 09:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 60978 00:06:43.654 09:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60978 ']' 00:06:43.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.654 09:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:06:43.654 09:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.654 09:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:43.654 09:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:43.654 09:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:43.654 09:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.654 [2024-10-30 09:41:22.045520] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:06:43.654 [2024-10-30 09:41:22.045641] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60978 ] 00:06:43.654 [2024-10-30 09:41:22.199110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.916 [2024-10-30 09:41:22.296958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.916 [2024-10-30 09:41:22.432526] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:43.916 [2024-10-30 09:41:22.432572] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:44.490 09:41:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.490 malloc1 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.490 [2024-10-30 09:41:22.933774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:44.490 [2024-10-30 09:41:22.933836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:44.490 [2024-10-30 09:41:22.933856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:44.490 [2024-10-30 09:41:22.933866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:44.490 [2024-10-30 09:41:22.935994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:44.490 [2024-10-30 09:41:22.936033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:44.490 pt1 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:44.490 09:41:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.490 malloc2 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.490 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:44.491 09:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.491 09:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.491 [2024-10-30 09:41:22.973737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:44.491 [2024-10-30 09:41:22.973785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:44.491 [2024-10-30 09:41:22.973806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:06:44.491 
[2024-10-30 09:41:22.973814] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:44.491 [2024-10-30 09:41:22.975887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:44.491 [2024-10-30 09:41:22.976026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:44.491 pt2 00:06:44.491 09:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.491 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:44.491 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:44.491 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:06:44.491 09:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.491 09:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.491 [2024-10-30 09:41:22.981795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:44.491 [2024-10-30 09:41:22.983666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:44.491 [2024-10-30 09:41:22.983816] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:44.491 [2024-10-30 09:41:22.983827] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:44.491 [2024-10-30 09:41:22.984084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:44.491 [2024-10-30 09:41:22.984233] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:44.491 [2024-10-30 09:41:22.984244] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:06:44.491 [2024-10-30 09:41:22.984378] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:44.491 09:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.491 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:06:44.491 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:44.491 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:44.491 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:44.491 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:44.491 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:44.491 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:44.491 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:44.491 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:44.491 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:44.491 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:44.491 09:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:44.491 09:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.491 09:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.491 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.491 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:44.491 "name": "raid_bdev1", 00:06:44.491 "uuid": 
"bb2208f3-1f7d-49b2-99ef-1a3aa7702c3f", 00:06:44.491 "strip_size_kb": 64, 00:06:44.491 "state": "online", 00:06:44.491 "raid_level": "concat", 00:06:44.491 "superblock": true, 00:06:44.491 "num_base_bdevs": 2, 00:06:44.491 "num_base_bdevs_discovered": 2, 00:06:44.491 "num_base_bdevs_operational": 2, 00:06:44.491 "base_bdevs_list": [ 00:06:44.491 { 00:06:44.491 "name": "pt1", 00:06:44.491 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:44.491 "is_configured": true, 00:06:44.491 "data_offset": 2048, 00:06:44.491 "data_size": 63488 00:06:44.491 }, 00:06:44.491 { 00:06:44.491 "name": "pt2", 00:06:44.491 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:44.491 "is_configured": true, 00:06:44.491 "data_offset": 2048, 00:06:44.491 "data_size": 63488 00:06:44.491 } 00:06:44.491 ] 00:06:44.491 }' 00:06:44.491 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:44.491 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.754 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:06:44.754 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:44.754 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:44.754 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:44.754 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:44.754 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:44.754 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:44.754 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:44.754 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.754 
09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.754 [2024-10-30 09:41:23.294111] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:44.754 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.754 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:44.754 "name": "raid_bdev1", 00:06:44.754 "aliases": [ 00:06:44.754 "bb2208f3-1f7d-49b2-99ef-1a3aa7702c3f" 00:06:44.754 ], 00:06:44.754 "product_name": "Raid Volume", 00:06:44.754 "block_size": 512, 00:06:44.754 "num_blocks": 126976, 00:06:44.754 "uuid": "bb2208f3-1f7d-49b2-99ef-1a3aa7702c3f", 00:06:44.754 "assigned_rate_limits": { 00:06:44.754 "rw_ios_per_sec": 0, 00:06:44.754 "rw_mbytes_per_sec": 0, 00:06:44.754 "r_mbytes_per_sec": 0, 00:06:44.754 "w_mbytes_per_sec": 0 00:06:44.754 }, 00:06:44.754 "claimed": false, 00:06:44.754 "zoned": false, 00:06:44.754 "supported_io_types": { 00:06:44.754 "read": true, 00:06:44.754 "write": true, 00:06:44.754 "unmap": true, 00:06:44.754 "flush": true, 00:06:44.754 "reset": true, 00:06:44.754 "nvme_admin": false, 00:06:44.754 "nvme_io": false, 00:06:44.754 "nvme_io_md": false, 00:06:44.754 "write_zeroes": true, 00:06:44.754 "zcopy": false, 00:06:44.754 "get_zone_info": false, 00:06:44.754 "zone_management": false, 00:06:44.754 "zone_append": false, 00:06:44.754 "compare": false, 00:06:44.754 "compare_and_write": false, 00:06:44.754 "abort": false, 00:06:44.754 "seek_hole": false, 00:06:44.754 "seek_data": false, 00:06:44.754 "copy": false, 00:06:44.754 "nvme_iov_md": false 00:06:44.754 }, 00:06:44.754 "memory_domains": [ 00:06:44.754 { 00:06:44.754 "dma_device_id": "system", 00:06:44.754 "dma_device_type": 1 00:06:44.754 }, 00:06:44.754 { 00:06:44.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:44.754 "dma_device_type": 2 00:06:44.754 }, 00:06:44.754 { 00:06:44.754 "dma_device_id": "system", 00:06:44.754 
"dma_device_type": 1 00:06:44.754 }, 00:06:44.754 { 00:06:44.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:44.754 "dma_device_type": 2 00:06:44.754 } 00:06:44.754 ], 00:06:44.754 "driver_specific": { 00:06:44.754 "raid": { 00:06:44.754 "uuid": "bb2208f3-1f7d-49b2-99ef-1a3aa7702c3f", 00:06:44.754 "strip_size_kb": 64, 00:06:44.754 "state": "online", 00:06:44.754 "raid_level": "concat", 00:06:44.754 "superblock": true, 00:06:44.754 "num_base_bdevs": 2, 00:06:44.754 "num_base_bdevs_discovered": 2, 00:06:44.754 "num_base_bdevs_operational": 2, 00:06:44.754 "base_bdevs_list": [ 00:06:44.754 { 00:06:44.754 "name": "pt1", 00:06:44.754 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:44.754 "is_configured": true, 00:06:44.754 "data_offset": 2048, 00:06:44.754 "data_size": 63488 00:06:44.754 }, 00:06:44.754 { 00:06:44.754 "name": "pt2", 00:06:44.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:44.754 "is_configured": true, 00:06:44.754 "data_offset": 2048, 00:06:44.754 "data_size": 63488 00:06:44.754 } 00:06:44.754 ] 00:06:44.754 } 00:06:44.754 } 00:06:44.754 }' 00:06:44.754 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:44.754 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:44.754 pt2' 00:06:44.754 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.016 09:41:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:06:45.016 [2024-10-30 09:41:23.450129] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bb2208f3-1f7d-49b2-99ef-1a3aa7702c3f 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bb2208f3-1f7d-49b2-99ef-1a3aa7702c3f ']' 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.016 [2024-10-30 09:41:23.481827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:45.016 [2024-10-30 09:41:23.481848] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:45.016 [2024-10-30 09:41:23.481916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:45.016 [2024-10-30 09:41:23.481964] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:45.016 [2024-10-30 09:41:23.481976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:06:45.016 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.017 [2024-10-30 09:41:23.573881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:45.017 [2024-10-30 09:41:23.575717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:45.017 [2024-10-30 09:41:23.575771] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:06:45.017 [2024-10-30 09:41:23.575817] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:06:45.017 [2024-10-30 09:41:23.575830] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:45.017 [2024-10-30 09:41:23.575840] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:06:45.017 request: 00:06:45.017 { 00:06:45.017 "name": "raid_bdev1", 00:06:45.017 "raid_level": "concat", 00:06:45.017 "base_bdevs": [ 00:06:45.017 "malloc1", 00:06:45.017 "malloc2" 00:06:45.017 ], 00:06:45.017 "strip_size_kb": 64, 00:06:45.017 "superblock": false, 00:06:45.017 "method": "bdev_raid_create", 00:06:45.017 "req_id": 1 00:06:45.017 } 00:06:45.017 Got JSON-RPC error response 00:06:45.017 response: 00:06:45.017 { 00:06:45.017 "code": -17, 00:06:45.017 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:45.017 } 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.017 [2024-10-30 09:41:23.617877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:45.017 [2024-10-30 09:41:23.617925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:45.017 [2024-10-30 09:41:23.617944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:45.017 [2024-10-30 09:41:23.617955] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:45.017 [2024-10-30 09:41:23.620106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:45.017 [2024-10-30 09:41:23.620142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:45.017 [2024-10-30 09:41:23.620229] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:06:45.017 [2024-10-30 09:41:23.620283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:45.017 pt1 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.017 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.277 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.277 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:45.277 "name": "raid_bdev1", 00:06:45.277 "uuid": "bb2208f3-1f7d-49b2-99ef-1a3aa7702c3f", 00:06:45.277 "strip_size_kb": 64, 00:06:45.277 "state": "configuring", 00:06:45.277 "raid_level": "concat", 00:06:45.278 "superblock": true, 00:06:45.278 "num_base_bdevs": 2, 00:06:45.278 "num_base_bdevs_discovered": 1, 00:06:45.278 "num_base_bdevs_operational": 2, 00:06:45.278 "base_bdevs_list": [ 00:06:45.278 { 00:06:45.278 "name": "pt1", 00:06:45.278 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:45.278 "is_configured": true, 00:06:45.278 "data_offset": 2048, 00:06:45.278 "data_size": 63488 00:06:45.278 }, 00:06:45.278 { 00:06:45.278 "name": null, 00:06:45.278 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:45.278 "is_configured": false, 00:06:45.278 "data_offset": 2048, 00:06:45.278 "data_size": 63488 00:06:45.278 } 00:06:45.278 ] 00:06:45.278 }' 00:06:45.278 09:41:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:45.278 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.579 [2024-10-30 09:41:23.929982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:45.579 [2024-10-30 09:41:23.930044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:45.579 [2024-10-30 09:41:23.930077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:06:45.579 [2024-10-30 09:41:23.930089] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:45.579 [2024-10-30 09:41:23.930507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:45.579 [2024-10-30 09:41:23.930527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:45.579 [2024-10-30 09:41:23.930594] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:06:45.579 [2024-10-30 09:41:23.930614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:45.579 [2024-10-30 09:41:23.930713] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:45.579 [2024-10-30 09:41:23.930724] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:45.579 [2024-10-30 09:41:23.930949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:06:45.579 [2024-10-30 09:41:23.931085] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:45.579 [2024-10-30 09:41:23.931094] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:45.579 [2024-10-30 09:41:23.931215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:45.579 pt2 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:45.579 "name": "raid_bdev1", 00:06:45.579 "uuid": "bb2208f3-1f7d-49b2-99ef-1a3aa7702c3f", 00:06:45.579 "strip_size_kb": 64, 00:06:45.579 "state": "online", 00:06:45.579 "raid_level": "concat", 00:06:45.579 "superblock": true, 00:06:45.579 "num_base_bdevs": 2, 00:06:45.579 "num_base_bdevs_discovered": 2, 00:06:45.579 "num_base_bdevs_operational": 2, 00:06:45.579 "base_bdevs_list": [ 00:06:45.579 { 00:06:45.579 "name": "pt1", 00:06:45.579 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:45.579 "is_configured": true, 00:06:45.579 "data_offset": 2048, 00:06:45.579 "data_size": 63488 00:06:45.579 }, 00:06:45.579 { 00:06:45.579 "name": "pt2", 00:06:45.579 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:45.579 "is_configured": true, 00:06:45.579 "data_offset": 2048, 00:06:45.579 "data_size": 63488 00:06:45.579 } 00:06:45.579 ] 00:06:45.579 }' 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:45.579 09:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.867 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:06:45.867 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:45.867 
09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:45.867 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:45.867 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:45.867 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:45.867 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:45.867 09:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.867 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:45.867 09:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.867 [2024-10-30 09:41:24.234324] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:45.867 09:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.867 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:45.867 "name": "raid_bdev1", 00:06:45.867 "aliases": [ 00:06:45.867 "bb2208f3-1f7d-49b2-99ef-1a3aa7702c3f" 00:06:45.867 ], 00:06:45.867 "product_name": "Raid Volume", 00:06:45.867 "block_size": 512, 00:06:45.867 "num_blocks": 126976, 00:06:45.867 "uuid": "bb2208f3-1f7d-49b2-99ef-1a3aa7702c3f", 00:06:45.867 "assigned_rate_limits": { 00:06:45.867 "rw_ios_per_sec": 0, 00:06:45.867 "rw_mbytes_per_sec": 0, 00:06:45.867 "r_mbytes_per_sec": 0, 00:06:45.867 "w_mbytes_per_sec": 0 00:06:45.867 }, 00:06:45.867 "claimed": false, 00:06:45.867 "zoned": false, 00:06:45.868 "supported_io_types": { 00:06:45.868 "read": true, 00:06:45.868 "write": true, 00:06:45.868 "unmap": true, 00:06:45.868 "flush": true, 00:06:45.868 "reset": true, 00:06:45.868 "nvme_admin": false, 00:06:45.868 "nvme_io": false, 00:06:45.868 "nvme_io_md": false, 00:06:45.868 
"write_zeroes": true, 00:06:45.868 "zcopy": false, 00:06:45.868 "get_zone_info": false, 00:06:45.868 "zone_management": false, 00:06:45.868 "zone_append": false, 00:06:45.868 "compare": false, 00:06:45.868 "compare_and_write": false, 00:06:45.868 "abort": false, 00:06:45.868 "seek_hole": false, 00:06:45.868 "seek_data": false, 00:06:45.868 "copy": false, 00:06:45.868 "nvme_iov_md": false 00:06:45.868 }, 00:06:45.868 "memory_domains": [ 00:06:45.868 { 00:06:45.868 "dma_device_id": "system", 00:06:45.868 "dma_device_type": 1 00:06:45.868 }, 00:06:45.868 { 00:06:45.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:45.868 "dma_device_type": 2 00:06:45.868 }, 00:06:45.868 { 00:06:45.868 "dma_device_id": "system", 00:06:45.868 "dma_device_type": 1 00:06:45.868 }, 00:06:45.868 { 00:06:45.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:45.868 "dma_device_type": 2 00:06:45.868 } 00:06:45.868 ], 00:06:45.868 "driver_specific": { 00:06:45.868 "raid": { 00:06:45.868 "uuid": "bb2208f3-1f7d-49b2-99ef-1a3aa7702c3f", 00:06:45.868 "strip_size_kb": 64, 00:06:45.868 "state": "online", 00:06:45.868 "raid_level": "concat", 00:06:45.868 "superblock": true, 00:06:45.868 "num_base_bdevs": 2, 00:06:45.868 "num_base_bdevs_discovered": 2, 00:06:45.868 "num_base_bdevs_operational": 2, 00:06:45.868 "base_bdevs_list": [ 00:06:45.868 { 00:06:45.868 "name": "pt1", 00:06:45.868 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:45.868 "is_configured": true, 00:06:45.868 "data_offset": 2048, 00:06:45.868 "data_size": 63488 00:06:45.868 }, 00:06:45.868 { 00:06:45.868 "name": "pt2", 00:06:45.868 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:45.868 "is_configured": true, 00:06:45.868 "data_offset": 2048, 00:06:45.868 "data_size": 63488 00:06:45.868 } 00:06:45.868 ] 00:06:45.868 } 00:06:45.868 } 00:06:45.868 }' 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:45.868 pt2' 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.868 09:41:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:06:45.868 [2024-10-30 09:41:24.390320] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bb2208f3-1f7d-49b2-99ef-1a3aa7702c3f '!=' bb2208f3-1f7d-49b2-99ef-1a3aa7702c3f ']' 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 60978 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60978 ']' 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60978 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60978 00:06:45.868 killing process with pid 60978 
00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60978' 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 60978 00:06:45.868 [2024-10-30 09:41:24.439903] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:45.868 09:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 60978 00:06:45.868 [2024-10-30 09:41:24.439978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:45.868 [2024-10-30 09:41:24.440024] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:45.868 [2024-10-30 09:41:24.440035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:46.130 [2024-10-30 09:41:24.567525] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:46.704 ************************************ 00:06:46.704 END TEST raid_superblock_test 00:06:46.704 ************************************ 00:06:46.704 09:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:06:46.704 00:06:46.704 real 0m3.292s 00:06:46.704 user 0m4.618s 00:06:46.704 sys 0m0.488s 00:06:46.704 09:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:46.704 09:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.965 09:41:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:06:46.965 09:41:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:46.965 09:41:25 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:06:46.965 09:41:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:46.965 ************************************ 00:06:46.965 START TEST raid_read_error_test 00:06:46.965 ************************************ 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 read 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:46.965 09:41:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:46.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8yNYlNlqmb 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61173 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61173 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 61173 ']' 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.965 09:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:46.965 [2024-10-30 09:41:25.418799] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:06:46.965 [2024-10-30 09:41:25.418920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61173 ] 00:06:46.965 [2024-10-30 09:41:25.571668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.226 [2024-10-30 09:41:25.675400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.226 [2024-10-30 09:41:25.813999] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:47.226 [2024-10-30 09:41:25.814054] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.800 BaseBdev1_malloc 00:06:47.800 09:41:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.800 true 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.800 [2024-10-30 09:41:26.301130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:47.800 [2024-10-30 09:41:26.301286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:47.800 [2024-10-30 09:41:26.301312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:47.800 [2024-10-30 09:41:26.301324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:47.800 [2024-10-30 09:41:26.303464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:47.800 [2024-10-30 09:41:26.303502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:47.800 BaseBdev1 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.800 BaseBdev2_malloc 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.800 true 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.800 [2024-10-30 09:41:26.345110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:47.800 [2024-10-30 09:41:26.345160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:47.800 [2024-10-30 09:41:26.345175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:47.800 [2024-10-30 09:41:26.345185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:47.800 [2024-10-30 09:41:26.347272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:47.800 [2024-10-30 09:41:26.347307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:47.800 BaseBdev2 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.800 09:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.800 [2024-10-30 09:41:26.353178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:47.800 [2024-10-30 09:41:26.355013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:47.800 [2024-10-30 09:41:26.355208] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:47.800 [2024-10-30 09:41:26.355223] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:47.800 [2024-10-30 09:41:26.355457] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:47.800 [2024-10-30 09:41:26.355603] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:47.800 [2024-10-30 09:41:26.355613] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:47.800 [2024-10-30 09:41:26.355748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:47.801 09:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.801 09:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:06:47.801 09:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:47.801 09:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:47.801 09:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:06:47.801 09:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:47.801 09:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:47.801 09:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:47.801 09:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:47.801 09:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:47.801 09:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:47.801 09:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:47.801 09:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.801 09:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.801 09:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:47.801 09:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.801 09:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:47.801 "name": "raid_bdev1", 00:06:47.801 "uuid": "c0d6494a-2ad1-4ca6-930f-e670da3f361c", 00:06:47.801 "strip_size_kb": 64, 00:06:47.801 "state": "online", 00:06:47.801 "raid_level": "concat", 00:06:47.801 "superblock": true, 00:06:47.801 "num_base_bdevs": 2, 00:06:47.801 "num_base_bdevs_discovered": 2, 00:06:47.801 "num_base_bdevs_operational": 2, 00:06:47.801 "base_bdevs_list": [ 00:06:47.801 { 00:06:47.801 "name": "BaseBdev1", 00:06:47.801 "uuid": "eeb583c3-b6bc-5ae1-9daa-62a144ff5dd2", 00:06:47.801 "is_configured": true, 00:06:47.801 "data_offset": 2048, 00:06:47.801 "data_size": 63488 00:06:47.801 }, 00:06:47.801 { 00:06:47.801 "name": "BaseBdev2", 00:06:47.801 
"uuid": "d8897bdd-ce8f-5577-ade1-1cbcaba41793", 00:06:47.801 "is_configured": true, 00:06:47.801 "data_offset": 2048, 00:06:47.801 "data_size": 63488 00:06:47.801 } 00:06:47.801 ] 00:06:47.801 }' 00:06:47.801 09:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:47.801 09:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.065 09:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:48.065 09:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:48.326 [2024-10-30 09:41:26.750184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:49.314 09:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:06:49.314 09:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.314 09:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.314 09:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.314 09:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:49.314 09:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:06:49.314 09:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:49.314 09:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:06:49.314 09:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:49.314 09:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:49.314 09:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:06:49.314 09:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:49.314 09:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:49.314 09:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:49.314 09:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:49.314 09:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:49.314 09:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:49.314 09:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:49.314 09:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.314 09:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:49.314 09:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.314 09:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.314 09:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:49.314 "name": "raid_bdev1", 00:06:49.314 "uuid": "c0d6494a-2ad1-4ca6-930f-e670da3f361c", 00:06:49.314 "strip_size_kb": 64, 00:06:49.314 "state": "online", 00:06:49.314 "raid_level": "concat", 00:06:49.314 "superblock": true, 00:06:49.314 "num_base_bdevs": 2, 00:06:49.314 "num_base_bdevs_discovered": 2, 00:06:49.314 "num_base_bdevs_operational": 2, 00:06:49.314 "base_bdevs_list": [ 00:06:49.314 { 00:06:49.314 "name": "BaseBdev1", 00:06:49.314 "uuid": "eeb583c3-b6bc-5ae1-9daa-62a144ff5dd2", 00:06:49.314 "is_configured": true, 00:06:49.314 "data_offset": 2048, 00:06:49.314 "data_size": 63488 00:06:49.314 }, 00:06:49.314 { 00:06:49.314 "name": "BaseBdev2", 00:06:49.314 "uuid": 
"d8897bdd-ce8f-5577-ade1-1cbcaba41793", 00:06:49.314 "is_configured": true, 00:06:49.314 "data_offset": 2048, 00:06:49.314 "data_size": 63488 00:06:49.314 } 00:06:49.314 ] 00:06:49.314 }' 00:06:49.314 09:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:49.315 09:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.576 09:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:49.576 09:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.576 09:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.576 [2024-10-30 09:41:28.004438] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:49.576 [2024-10-30 09:41:28.004471] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:49.576 [2024-10-30 09:41:28.007809] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:49.576 [2024-10-30 09:41:28.007857] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:49.576 [2024-10-30 09:41:28.007889] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:49.576 [2024-10-30 09:41:28.007903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:49.576 { 00:06:49.576 "results": [ 00:06:49.576 { 00:06:49.576 "job": "raid_bdev1", 00:06:49.576 "core_mask": "0x1", 00:06:49.576 "workload": "randrw", 00:06:49.576 "percentage": 50, 00:06:49.576 "status": "finished", 00:06:49.576 "queue_depth": 1, 00:06:49.576 "io_size": 131072, 00:06:49.576 "runtime": 1.252394, 00:06:49.576 "iops": 15012.847394669729, 00:06:49.576 "mibps": 1876.605924333716, 00:06:49.576 "io_failed": 1, 00:06:49.576 "io_timeout": 0, 00:06:49.576 "avg_latency_us": 
91.07980314106996, 00:06:49.576 "min_latency_us": 33.28, 00:06:49.576 "max_latency_us": 1739.2246153846154 00:06:49.576 } 00:06:49.576 ], 00:06:49.576 "core_count": 1 00:06:49.576 } 00:06:49.576 09:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.576 09:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61173 00:06:49.576 09:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 61173 ']' 00:06:49.576 09:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 61173 00:06:49.576 09:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:06:49.576 09:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:49.576 09:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61173 00:06:49.576 killing process with pid 61173 00:06:49.576 09:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:49.576 09:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:49.576 09:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61173' 00:06:49.576 09:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 61173 00:06:49.576 [2024-10-30 09:41:28.034087] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:49.576 09:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 61173 00:06:49.576 [2024-10-30 09:41:28.117100] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:50.520 09:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8yNYlNlqmb 00:06:50.520 09:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:50.520 09:41:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:50.520 09:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80 00:06:50.520 09:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:06:50.520 ************************************ 00:06:50.520 END TEST raid_read_error_test 00:06:50.520 ************************************ 00:06:50.520 09:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:50.520 09:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:50.520 09:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.80 != \0\.\0\0 ]] 00:06:50.520 00:06:50.520 real 0m3.516s 00:06:50.520 user 0m4.194s 00:06:50.520 sys 0m0.387s 00:06:50.520 09:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:50.520 09:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.520 09:41:28 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:06:50.520 09:41:28 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:50.520 09:41:28 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:50.520 09:41:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:50.520 ************************************ 00:06:50.520 START TEST raid_write_error_test 00:06:50.520 ************************************ 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 write 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:06:50.520 
09:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:50.520 09:41:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qvPRplOzOu 00:06:50.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61313 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61313 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 61313 ']' 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:50.520 09:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.520 [2024-10-30 09:41:28.998617] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:06:50.520 [2024-10-30 09:41:28.998735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61313 ] 00:06:50.782 [2024-10-30 09:41:29.157914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.782 [2024-10-30 09:41:29.258581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.782 [2024-10-30 09:41:29.396590] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:50.782 [2024-10-30 09:41:29.396635] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.353 BaseBdev1_malloc 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.353 true 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.353 [2024-10-30 09:41:29.882229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:51.353 [2024-10-30 09:41:29.882384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:51.353 [2024-10-30 09:41:29.882427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:51.353 [2024-10-30 09:41:29.882488] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:51.353 [2024-10-30 09:41:29.884668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:51.353 [2024-10-30 09:41:29.884795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:51.353 BaseBdev1 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.353 BaseBdev2_malloc 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:51.353 09:41:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.353 true 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.353 [2024-10-30 09:41:29.930194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:51.353 [2024-10-30 09:41:29.930241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:51.353 [2024-10-30 09:41:29.930257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:51.353 [2024-10-30 09:41:29.930267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:51.353 [2024-10-30 09:41:29.932383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:51.353 [2024-10-30 09:41:29.932420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:51.353 BaseBdev2 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.353 [2024-10-30 09:41:29.938259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:06:51.353 [2024-10-30 09:41:29.940181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:51.353 [2024-10-30 09:41:29.940370] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:51.353 [2024-10-30 09:41:29.940384] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:51.353 [2024-10-30 09:41:29.940621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:51.353 [2024-10-30 09:41:29.940766] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:51.353 [2024-10-30 09:41:29.940776] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:51.353 [2024-10-30 09:41:29.940910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:51.353 09:41:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:51.353 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.613 09:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:51.613 "name": "raid_bdev1", 00:06:51.613 "uuid": "0bab47cc-706c-4b9c-8981-7c176a74c4fe", 00:06:51.613 "strip_size_kb": 64, 00:06:51.613 "state": "online", 00:06:51.613 "raid_level": "concat", 00:06:51.613 "superblock": true, 00:06:51.613 "num_base_bdevs": 2, 00:06:51.613 "num_base_bdevs_discovered": 2, 00:06:51.613 "num_base_bdevs_operational": 2, 00:06:51.613 "base_bdevs_list": [ 00:06:51.613 { 00:06:51.613 "name": "BaseBdev1", 00:06:51.613 "uuid": "9f53bc92-1342-5211-83a7-4d431a44bffb", 00:06:51.613 "is_configured": true, 00:06:51.613 "data_offset": 2048, 00:06:51.613 "data_size": 63488 00:06:51.613 }, 00:06:51.613 { 00:06:51.613 "name": "BaseBdev2", 00:06:51.613 "uuid": "0703334f-bde2-53b4-b708-01a0850dbc95", 00:06:51.613 "is_configured": true, 00:06:51.613 "data_offset": 2048, 00:06:51.613 "data_size": 63488 00:06:51.613 } 00:06:51.613 ] 00:06:51.613 }' 00:06:51.613 09:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:51.613 09:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.874 09:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:06:51.874 09:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:51.874 [2024-10-30 09:41:30.335346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:52.844 09:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:06:52.844 09:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.844 09:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.844 09:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.844 09:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:52.844 09:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:06:52.844 09:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:52.844 09:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:06:52.844 09:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:52.844 09:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:52.844 09:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:52.844 09:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:52.844 09:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:52.844 09:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:52.844 09:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:06:52.844 09:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:52.844 09:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:52.844 09:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:52.844 09:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:52.844 09:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.844 09:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.844 09:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.844 09:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:52.844 "name": "raid_bdev1", 00:06:52.844 "uuid": "0bab47cc-706c-4b9c-8981-7c176a74c4fe", 00:06:52.844 "strip_size_kb": 64, 00:06:52.844 "state": "online", 00:06:52.844 "raid_level": "concat", 00:06:52.844 "superblock": true, 00:06:52.844 "num_base_bdevs": 2, 00:06:52.844 "num_base_bdevs_discovered": 2, 00:06:52.844 "num_base_bdevs_operational": 2, 00:06:52.844 "base_bdevs_list": [ 00:06:52.844 { 00:06:52.844 "name": "BaseBdev1", 00:06:52.844 "uuid": "9f53bc92-1342-5211-83a7-4d431a44bffb", 00:06:52.844 "is_configured": true, 00:06:52.844 "data_offset": 2048, 00:06:52.844 "data_size": 63488 00:06:52.844 }, 00:06:52.844 { 00:06:52.844 "name": "BaseBdev2", 00:06:52.844 "uuid": "0703334f-bde2-53b4-b708-01a0850dbc95", 00:06:52.844 "is_configured": true, 00:06:52.844 "data_offset": 2048, 00:06:52.844 "data_size": 63488 00:06:52.844 } 00:06:52.844 ] 00:06:52.844 }' 00:06:52.844 09:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:52.844 09:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.106 09:41:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:53.106 09:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.106 09:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.106 [2024-10-30 09:41:31.589286] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:53.106 [2024-10-30 09:41:31.589440] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:53.106 [2024-10-30 09:41:31.592542] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:53.106 [2024-10-30 09:41:31.592679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:53.106 [2024-10-30 09:41:31.592719] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:53.106 [2024-10-30 09:41:31.592733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:53.106 { 00:06:53.106 "results": [ 00:06:53.106 { 00:06:53.106 "job": "raid_bdev1", 00:06:53.106 "core_mask": "0x1", 00:06:53.106 "workload": "randrw", 00:06:53.106 "percentage": 50, 00:06:53.106 "status": "finished", 00:06:53.106 "queue_depth": 1, 00:06:53.106 "io_size": 131072, 00:06:53.106 "runtime": 1.252226, 00:06:53.106 "iops": 15168.18848993712, 00:06:53.106 "mibps": 1896.02356124214, 00:06:53.106 "io_failed": 1, 00:06:53.106 "io_timeout": 0, 00:06:53.106 "avg_latency_us": 90.10222382408327, 00:06:53.106 "min_latency_us": 33.28, 00:06:53.106 "max_latency_us": 1739.2246153846154 00:06:53.106 } 00:06:53.106 ], 00:06:53.106 "core_count": 1 00:06:53.106 } 00:06:53.106 09:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.106 09:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61313 00:06:53.106 09:41:31 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@952 -- # '[' -z 61313 ']' 00:06:53.106 09:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 61313 00:06:53.106 09:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:06:53.106 09:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:53.106 09:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61313 00:06:53.106 killing process with pid 61313 00:06:53.106 09:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:53.106 09:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:53.106 09:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61313' 00:06:53.106 09:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 61313 00:06:53.106 09:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 61313 00:06:53.106 [2024-10-30 09:41:31.622264] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:53.106 [2024-10-30 09:41:31.706423] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:54.051 09:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:54.051 09:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qvPRplOzOu 00:06:54.051 09:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:54.051 09:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80 00:06:54.051 09:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:06:54.051 09:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:54.051 09:41:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:06:54.051 09:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.80 != \0\.\0\0 ]] 00:06:54.051 ************************************ 00:06:54.051 END TEST raid_write_error_test 00:06:54.051 ************************************ 00:06:54.051 00:06:54.051 real 0m3.526s 00:06:54.051 user 0m4.217s 00:06:54.051 sys 0m0.384s 00:06:54.051 09:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:54.051 09:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.051 09:41:32 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:54.051 09:41:32 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:06:54.051 09:41:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:54.051 09:41:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:54.051 09:41:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:54.051 ************************************ 00:06:54.051 START TEST raid_state_function_test 00:06:54.051 ************************************ 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 false 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
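The raid_write_error_test epilogue above derives `fail_per_s` by filtering the bdevperf summary through `grep raid_bdev1 | grep -v Job | awk '{print $6}'` and then asserting `[[ 0.80 != \0\.\0\0 ]]`, i.e. that a level without redundancy (concat) really did surface write failures. A minimal standalone sketch of that pipeline — the sample summary line and its column layout are illustrative, not real bdevperf output; only the grep/awk filtering mirrors the log:

```shell
#!/usr/bin/env bash
# Illustrative bdevperf-style summary; the real tool's column layout
# may differ. Field 6 here stands in for the fail-per-second column.
summary='Job: header line 0 0 0.00
raid_bdev1 15168.19 1896.02 1 0 0.80'

# Keep the per-bdev line, drop the Job header, take field 6 (fail/s).
fail_per_s=$(printf '%s\n' "$summary" | grep raid_bdev1 | grep -v Job | awk '{print $6}')

# The harness expects write failures on a non-redundant level,
# so fail/s must not be 0.00.
if [[ $fail_per_s != "0.00" ]]; then
  echo "observed fail_per_s=$fail_per_s (non-zero, as expected)"
fi
```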
00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:54.051 Process raid pid: 61440 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61440 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61440' 
00:06:54.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61440 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 61440 ']' 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.051 09:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:54.051 [2024-10-30 09:41:32.587215] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:06:54.051 [2024-10-30 09:41:32.587349] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.312 [2024-10-30 09:41:32.742213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.312 [2024-10-30 09:41:32.842732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.574 [2024-10-30 09:41:32.981030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.574 [2024-10-30 09:41:32.981081] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.836 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:54.836 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:06:54.836 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:54.836 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.836 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.836 [2024-10-30 09:41:33.432124] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:54.836 [2024-10-30 09:41:33.432175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:54.837 [2024-10-30 09:41:33.432185] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:54.837 [2024-10-30 09:41:33.432195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:54.837 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.837 09:41:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:06:54.837 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:54.837 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:54.837 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:54.837 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:54.837 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:54.837 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:54.837 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:54.837 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:54.837 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:54.837 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:54.837 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.837 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:54.837 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.837 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.098 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:55.098 "name": "Existed_Raid", 00:06:55.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.098 "strip_size_kb": 0, 00:06:55.098 "state": "configuring", 00:06:55.098 
"raid_level": "raid1", 00:06:55.098 "superblock": false, 00:06:55.098 "num_base_bdevs": 2, 00:06:55.098 "num_base_bdevs_discovered": 0, 00:06:55.098 "num_base_bdevs_operational": 2, 00:06:55.098 "base_bdevs_list": [ 00:06:55.098 { 00:06:55.098 "name": "BaseBdev1", 00:06:55.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.098 "is_configured": false, 00:06:55.098 "data_offset": 0, 00:06:55.098 "data_size": 0 00:06:55.098 }, 00:06:55.098 { 00:06:55.098 "name": "BaseBdev2", 00:06:55.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.098 "is_configured": false, 00:06:55.098 "data_offset": 0, 00:06:55.098 "data_size": 0 00:06:55.098 } 00:06:55.098 ] 00:06:55.098 }' 00:06:55.098 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:55.098 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.462 [2024-10-30 09:41:33.760160] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:55.462 [2024-10-30 09:41:33.760191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:06:55.462 [2024-10-30 09:41:33.768156] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:55.462 [2024-10-30 09:41:33.768198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:55.462 [2024-10-30 09:41:33.768206] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:55.462 [2024-10-30 09:41:33.768217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.462 BaseBdev1 00:06:55.462 [2024-10-30 09:41:33.800856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.462 [ 00:06:55.462 { 00:06:55.462 "name": "BaseBdev1", 00:06:55.462 "aliases": [ 00:06:55.462 "97f1b246-fefc-46f7-a99d-109ad96e3417" 00:06:55.462 ], 00:06:55.462 "product_name": "Malloc disk", 00:06:55.462 "block_size": 512, 00:06:55.462 "num_blocks": 65536, 00:06:55.462 "uuid": "97f1b246-fefc-46f7-a99d-109ad96e3417", 00:06:55.462 "assigned_rate_limits": { 00:06:55.462 "rw_ios_per_sec": 0, 00:06:55.462 "rw_mbytes_per_sec": 0, 00:06:55.462 "r_mbytes_per_sec": 0, 00:06:55.462 "w_mbytes_per_sec": 0 00:06:55.462 }, 00:06:55.462 "claimed": true, 00:06:55.462 "claim_type": "exclusive_write", 00:06:55.462 "zoned": false, 00:06:55.462 "supported_io_types": { 00:06:55.462 "read": true, 00:06:55.462 "write": true, 00:06:55.462 "unmap": true, 00:06:55.462 "flush": true, 00:06:55.462 "reset": true, 00:06:55.462 "nvme_admin": false, 00:06:55.462 "nvme_io": false, 00:06:55.462 "nvme_io_md": false, 00:06:55.462 "write_zeroes": true, 00:06:55.462 "zcopy": true, 00:06:55.462 "get_zone_info": false, 00:06:55.462 "zone_management": false, 00:06:55.462 "zone_append": false, 00:06:55.462 "compare": false, 00:06:55.462 "compare_and_write": false, 00:06:55.462 "abort": true, 00:06:55.462 "seek_hole": false, 00:06:55.462 "seek_data": false, 00:06:55.462 "copy": true, 00:06:55.462 "nvme_iov_md": 
false 00:06:55.462 }, 00:06:55.462 "memory_domains": [ 00:06:55.462 { 00:06:55.462 "dma_device_id": "system", 00:06:55.462 "dma_device_type": 1 00:06:55.462 }, 00:06:55.462 { 00:06:55.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.462 "dma_device_type": 2 00:06:55.462 } 00:06:55.462 ], 00:06:55.462 "driver_specific": {} 00:06:55.462 } 00:06:55.462 ] 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.462 
09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:55.462 "name": "Existed_Raid", 00:06:55.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.462 "strip_size_kb": 0, 00:06:55.462 "state": "configuring", 00:06:55.462 "raid_level": "raid1", 00:06:55.462 "superblock": false, 00:06:55.462 "num_base_bdevs": 2, 00:06:55.462 "num_base_bdevs_discovered": 1, 00:06:55.462 "num_base_bdevs_operational": 2, 00:06:55.462 "base_bdevs_list": [ 00:06:55.462 { 00:06:55.462 "name": "BaseBdev1", 00:06:55.462 "uuid": "97f1b246-fefc-46f7-a99d-109ad96e3417", 00:06:55.462 "is_configured": true, 00:06:55.462 "data_offset": 0, 00:06:55.462 "data_size": 65536 00:06:55.462 }, 00:06:55.462 { 00:06:55.462 "name": "BaseBdev2", 00:06:55.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.462 "is_configured": false, 00:06:55.462 "data_offset": 0, 00:06:55.462 "data_size": 0 00:06:55.462 } 00:06:55.462 ] 00:06:55.462 }' 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:55.462 09:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.723 [2024-10-30 09:41:34.128974] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:55.723 [2024-10-30 09:41:34.129022] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.723 [2024-10-30 09:41:34.137017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:55.723 [2024-10-30 09:41:34.138886] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:55.723 [2024-10-30 09:41:34.138930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:55.723 "name": "Existed_Raid", 00:06:55.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.723 "strip_size_kb": 0, 00:06:55.723 "state": "configuring", 00:06:55.723 "raid_level": "raid1", 00:06:55.723 "superblock": false, 00:06:55.723 "num_base_bdevs": 2, 00:06:55.723 "num_base_bdevs_discovered": 1, 00:06:55.723 "num_base_bdevs_operational": 2, 00:06:55.723 "base_bdevs_list": [ 00:06:55.723 { 00:06:55.723 "name": "BaseBdev1", 00:06:55.723 "uuid": "97f1b246-fefc-46f7-a99d-109ad96e3417", 00:06:55.723 "is_configured": true, 00:06:55.723 "data_offset": 0, 00:06:55.723 "data_size": 65536 00:06:55.723 }, 00:06:55.723 { 00:06:55.723 "name": "BaseBdev2", 00:06:55.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.723 "is_configured": false, 00:06:55.723 "data_offset": 0, 00:06:55.723 "data_size": 0 00:06:55.723 } 00:06:55.723 ] 
00:06:55.723 }' 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:55.723 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.985 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:55.985 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.985 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.985 [2024-10-30 09:41:34.495856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:55.985 [2024-10-30 09:41:34.496042] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:55.985 [2024-10-30 09:41:34.496080] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:55.985 [2024-10-30 09:41:34.496373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:55.985 [2024-10-30 09:41:34.496530] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:55.985 [2024-10-30 09:41:34.496541] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:55.986 [2024-10-30 09:41:34.496776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:55.986 BaseBdev2 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@903 -- # local i 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.986 [ 00:06:55.986 { 00:06:55.986 "name": "BaseBdev2", 00:06:55.986 "aliases": [ 00:06:55.986 "6f00e8cb-a87b-4b71-b90c-692fa76230f3" 00:06:55.986 ], 00:06:55.986 "product_name": "Malloc disk", 00:06:55.986 "block_size": 512, 00:06:55.986 "num_blocks": 65536, 00:06:55.986 "uuid": "6f00e8cb-a87b-4b71-b90c-692fa76230f3", 00:06:55.986 "assigned_rate_limits": { 00:06:55.986 "rw_ios_per_sec": 0, 00:06:55.986 "rw_mbytes_per_sec": 0, 00:06:55.986 "r_mbytes_per_sec": 0, 00:06:55.986 "w_mbytes_per_sec": 0 00:06:55.986 }, 00:06:55.986 "claimed": true, 00:06:55.986 "claim_type": "exclusive_write", 00:06:55.986 "zoned": false, 00:06:55.986 "supported_io_types": { 00:06:55.986 "read": true, 00:06:55.986 "write": true, 00:06:55.986 "unmap": true, 00:06:55.986 "flush": true, 00:06:55.986 "reset": true, 00:06:55.986 "nvme_admin": false, 00:06:55.986 "nvme_io": false, 00:06:55.986 "nvme_io_md": false, 00:06:55.986 "write_zeroes": 
true, 00:06:55.986 "zcopy": true, 00:06:55.986 "get_zone_info": false, 00:06:55.986 "zone_management": false, 00:06:55.986 "zone_append": false, 00:06:55.986 "compare": false, 00:06:55.986 "compare_and_write": false, 00:06:55.986 "abort": true, 00:06:55.986 "seek_hole": false, 00:06:55.986 "seek_data": false, 00:06:55.986 "copy": true, 00:06:55.986 "nvme_iov_md": false 00:06:55.986 }, 00:06:55.986 "memory_domains": [ 00:06:55.986 { 00:06:55.986 "dma_device_id": "system", 00:06:55.986 "dma_device_type": 1 00:06:55.986 }, 00:06:55.986 { 00:06:55.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.986 "dma_device_type": 2 00:06:55.986 } 00:06:55.986 ], 00:06:55.986 "driver_specific": {} 00:06:55.986 } 00:06:55.986 ] 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:55.986 09:41:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:55.986 "name": "Existed_Raid", 00:06:55.986 "uuid": "6fd069a5-0773-4fd9-adb1-71da5730fbf9", 00:06:55.986 "strip_size_kb": 0, 00:06:55.986 "state": "online", 00:06:55.986 "raid_level": "raid1", 00:06:55.986 "superblock": false, 00:06:55.986 "num_base_bdevs": 2, 00:06:55.986 "num_base_bdevs_discovered": 2, 00:06:55.986 "num_base_bdevs_operational": 2, 00:06:55.986 "base_bdevs_list": [ 00:06:55.986 { 00:06:55.986 "name": "BaseBdev1", 00:06:55.986 "uuid": "97f1b246-fefc-46f7-a99d-109ad96e3417", 00:06:55.986 "is_configured": true, 00:06:55.986 "data_offset": 0, 00:06:55.986 "data_size": 65536 00:06:55.986 }, 00:06:55.986 { 00:06:55.986 "name": "BaseBdev2", 00:06:55.986 "uuid": "6f00e8cb-a87b-4b71-b90c-692fa76230f3", 00:06:55.986 "is_configured": true, 00:06:55.986 "data_offset": 0, 00:06:55.986 "data_size": 65536 00:06:55.986 } 00:06:55.986 ] 00:06:55.986 }' 00:06:55.986 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:55.986 09:41:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.249 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:56.249 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:56.249 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:56.249 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:56.249 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:56.249 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:56.249 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:56.249 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.249 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.249 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:56.249 [2024-10-30 09:41:34.848303] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:56.249 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.511 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:56.511 "name": "Existed_Raid", 00:06:56.511 "aliases": [ 00:06:56.511 "6fd069a5-0773-4fd9-adb1-71da5730fbf9" 00:06:56.511 ], 00:06:56.511 "product_name": "Raid Volume", 00:06:56.511 "block_size": 512, 00:06:56.511 "num_blocks": 65536, 00:06:56.511 "uuid": "6fd069a5-0773-4fd9-adb1-71da5730fbf9", 00:06:56.511 "assigned_rate_limits": { 00:06:56.511 "rw_ios_per_sec": 0, 00:06:56.511 "rw_mbytes_per_sec": 0, 00:06:56.511 "r_mbytes_per_sec": 0, 00:06:56.511 
"w_mbytes_per_sec": 0 00:06:56.511 }, 00:06:56.511 "claimed": false, 00:06:56.511 "zoned": false, 00:06:56.511 "supported_io_types": { 00:06:56.511 "read": true, 00:06:56.511 "write": true, 00:06:56.511 "unmap": false, 00:06:56.511 "flush": false, 00:06:56.511 "reset": true, 00:06:56.511 "nvme_admin": false, 00:06:56.511 "nvme_io": false, 00:06:56.511 "nvme_io_md": false, 00:06:56.511 "write_zeroes": true, 00:06:56.511 "zcopy": false, 00:06:56.511 "get_zone_info": false, 00:06:56.511 "zone_management": false, 00:06:56.511 "zone_append": false, 00:06:56.511 "compare": false, 00:06:56.511 "compare_and_write": false, 00:06:56.511 "abort": false, 00:06:56.511 "seek_hole": false, 00:06:56.511 "seek_data": false, 00:06:56.511 "copy": false, 00:06:56.511 "nvme_iov_md": false 00:06:56.511 }, 00:06:56.511 "memory_domains": [ 00:06:56.511 { 00:06:56.511 "dma_device_id": "system", 00:06:56.511 "dma_device_type": 1 00:06:56.511 }, 00:06:56.511 { 00:06:56.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.511 "dma_device_type": 2 00:06:56.511 }, 00:06:56.511 { 00:06:56.511 "dma_device_id": "system", 00:06:56.511 "dma_device_type": 1 00:06:56.511 }, 00:06:56.511 { 00:06:56.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.511 "dma_device_type": 2 00:06:56.511 } 00:06:56.511 ], 00:06:56.511 "driver_specific": { 00:06:56.511 "raid": { 00:06:56.511 "uuid": "6fd069a5-0773-4fd9-adb1-71da5730fbf9", 00:06:56.511 "strip_size_kb": 0, 00:06:56.511 "state": "online", 00:06:56.511 "raid_level": "raid1", 00:06:56.511 "superblock": false, 00:06:56.511 "num_base_bdevs": 2, 00:06:56.511 "num_base_bdevs_discovered": 2, 00:06:56.511 "num_base_bdevs_operational": 2, 00:06:56.511 "base_bdevs_list": [ 00:06:56.511 { 00:06:56.511 "name": "BaseBdev1", 00:06:56.511 "uuid": "97f1b246-fefc-46f7-a99d-109ad96e3417", 00:06:56.511 "is_configured": true, 00:06:56.511 "data_offset": 0, 00:06:56.511 "data_size": 65536 00:06:56.511 }, 00:06:56.511 { 00:06:56.511 "name": "BaseBdev2", 00:06:56.511 "uuid": 
"6f00e8cb-a87b-4b71-b90c-692fa76230f3", 00:06:56.511 "is_configured": true, 00:06:56.511 "data_offset": 0, 00:06:56.511 "data_size": 65536 00:06:56.511 } 00:06:56.511 ] 00:06:56.511 } 00:06:56.511 } 00:06:56.511 }' 00:06:56.511 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:56.511 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:56.511 BaseBdev2' 00:06:56.511 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:56.511 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:56.511 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:56.511 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:56.511 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:56.511 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.511 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.511 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.511 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:56.511 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:56.511 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:56.511 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:56.511 09:41:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:56.511 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.511 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.511 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.511 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:56.511 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:56.511 09:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:56.511 09:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.511 09:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.511 [2024-10-30 09:41:35.004071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:56.511 09:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.511 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:56.511 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:06:56.511 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:56.511 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:06:56.511 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:06:56.511 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:06:56.511 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:06:56.511 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:56.511 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:56.511 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:56.511 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:56.511 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:56.511 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.512 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.512 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.512 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.512 09:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.512 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:56.512 09:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.512 09:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.512 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.512 "name": "Existed_Raid", 00:06:56.512 "uuid": "6fd069a5-0773-4fd9-adb1-71da5730fbf9", 00:06:56.512 "strip_size_kb": 0, 00:06:56.512 "state": "online", 00:06:56.512 "raid_level": "raid1", 00:06:56.512 "superblock": false, 00:06:56.512 "num_base_bdevs": 2, 00:06:56.512 "num_base_bdevs_discovered": 1, 00:06:56.512 "num_base_bdevs_operational": 1, 00:06:56.512 "base_bdevs_list": [ 00:06:56.512 { 
00:06:56.512 "name": null, 00:06:56.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.512 "is_configured": false, 00:06:56.512 "data_offset": 0, 00:06:56.512 "data_size": 65536 00:06:56.512 }, 00:06:56.512 { 00:06:56.512 "name": "BaseBdev2", 00:06:56.512 "uuid": "6f00e8cb-a87b-4b71-b90c-692fa76230f3", 00:06:56.512 "is_configured": true, 00:06:56.512 "data_offset": 0, 00:06:56.512 "data_size": 65536 00:06:56.512 } 00:06:56.512 ] 00:06:56.512 }' 00:06:56.512 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.512 09:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.773 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:56.773 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:56.773 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.773 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:56.773 09:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.773 09:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:06:57.035 [2024-10-30 09:41:35.423831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:57.035 [2024-10-30 09:41:35.423926] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:57.035 [2024-10-30 09:41:35.484021] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:57.035 [2024-10-30 09:41:35.484077] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:57.035 [2024-10-30 09:41:35.484089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61440 00:06:57.035 09:41:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 61440 ']' 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 61440 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61440 00:06:57.035 killing process with pid 61440 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61440' 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 61440 00:06:57.035 [2024-10-30 09:41:35.547840] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:57.035 09:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 61440 00:06:57.035 [2024-10-30 09:41:35.558401] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:57.980 00:06:57.980 real 0m3.751s 00:06:57.980 user 0m5.430s 00:06:57.980 sys 0m0.564s 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:57.980 ************************************ 00:06:57.980 END TEST raid_state_function_test 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.980 ************************************ 00:06:57.980 09:41:36 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:06:57.980 09:41:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:57.980 09:41:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:57.980 09:41:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:57.980 ************************************ 00:06:57.980 START TEST raid_state_function_test_sb 00:06:57.980 ************************************ 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61682 00:06:57.980 Process raid pid: 61682 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61682' 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61682 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 61682 ']' 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:57.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:57.980 09:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.980 [2024-10-30 09:41:36.398142] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:06:57.980 [2024-10-30 09:41:36.398255] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:57.980 [2024-10-30 09:41:36.562895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.241 [2024-10-30 09:41:36.666120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.241 [2024-10-30 09:41:36.805335] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:58.241 [2024-10-30 09:41:36.805375] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:58.853 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:58.853 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:06:58.853 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:58.853 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.854 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:06:58.854 [2024-10-30 09:41:37.256075] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:58.854 [2024-10-30 09:41:37.256126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:58.854 [2024-10-30 09:41:37.256137] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:58.854 [2024-10-30 09:41:37.256148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:58.854 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.854 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:06:58.854 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:58.854 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:58.854 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:58.854 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:58.854 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:58.854 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:58.854 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:58.854 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:58.854 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:58.854 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.854 09:41:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:58.854 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.854 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.854 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.854 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:58.854 "name": "Existed_Raid", 00:06:58.854 "uuid": "8231e81c-2048-4e9d-8222-8e4e61a77ade", 00:06:58.854 "strip_size_kb": 0, 00:06:58.854 "state": "configuring", 00:06:58.854 "raid_level": "raid1", 00:06:58.854 "superblock": true, 00:06:58.854 "num_base_bdevs": 2, 00:06:58.854 "num_base_bdevs_discovered": 0, 00:06:58.854 "num_base_bdevs_operational": 2, 00:06:58.854 "base_bdevs_list": [ 00:06:58.854 { 00:06:58.854 "name": "BaseBdev1", 00:06:58.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:58.854 "is_configured": false, 00:06:58.854 "data_offset": 0, 00:06:58.854 "data_size": 0 00:06:58.854 }, 00:06:58.854 { 00:06:58.854 "name": "BaseBdev2", 00:06:58.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:58.854 "is_configured": false, 00:06:58.854 "data_offset": 0, 00:06:58.854 "data_size": 0 00:06:58.854 } 00:06:58.854 ] 00:06:58.854 }' 00:06:58.854 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:58.854 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.116 [2024-10-30 
09:41:37.580117] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:59.116 [2024-10-30 09:41:37.580152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.116 [2024-10-30 09:41:37.588106] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:59.116 [2024-10-30 09:41:37.588152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:59.116 [2024-10-30 09:41:37.588164] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:59.116 [2024-10-30 09:41:37.588179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.116 [2024-10-30 09:41:37.621719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:59.116 BaseBdev1 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.116 [ 00:06:59.116 { 00:06:59.116 "name": "BaseBdev1", 00:06:59.116 "aliases": [ 00:06:59.116 "4ad9195c-c698-44e9-8ca8-0d2a529f0f86" 00:06:59.116 ], 00:06:59.116 "product_name": "Malloc disk", 00:06:59.116 "block_size": 512, 00:06:59.116 "num_blocks": 65536, 00:06:59.116 "uuid": "4ad9195c-c698-44e9-8ca8-0d2a529f0f86", 00:06:59.116 "assigned_rate_limits": { 00:06:59.116 "rw_ios_per_sec": 0, 00:06:59.116 "rw_mbytes_per_sec": 0, 00:06:59.116 "r_mbytes_per_sec": 0, 00:06:59.116 
"w_mbytes_per_sec": 0 00:06:59.116 }, 00:06:59.116 "claimed": true, 00:06:59.116 "claim_type": "exclusive_write", 00:06:59.116 "zoned": false, 00:06:59.116 "supported_io_types": { 00:06:59.116 "read": true, 00:06:59.116 "write": true, 00:06:59.116 "unmap": true, 00:06:59.116 "flush": true, 00:06:59.116 "reset": true, 00:06:59.116 "nvme_admin": false, 00:06:59.116 "nvme_io": false, 00:06:59.116 "nvme_io_md": false, 00:06:59.116 "write_zeroes": true, 00:06:59.116 "zcopy": true, 00:06:59.116 "get_zone_info": false, 00:06:59.116 "zone_management": false, 00:06:59.116 "zone_append": false, 00:06:59.116 "compare": false, 00:06:59.116 "compare_and_write": false, 00:06:59.116 "abort": true, 00:06:59.116 "seek_hole": false, 00:06:59.116 "seek_data": false, 00:06:59.116 "copy": true, 00:06:59.116 "nvme_iov_md": false 00:06:59.116 }, 00:06:59.116 "memory_domains": [ 00:06:59.116 { 00:06:59.116 "dma_device_id": "system", 00:06:59.116 "dma_device_type": 1 00:06:59.116 }, 00:06:59.116 { 00:06:59.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.116 "dma_device_type": 2 00:06:59.116 } 00:06:59.116 ], 00:06:59.116 "driver_specific": {} 00:06:59.116 } 00:06:59.116 ] 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.116 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.116 "name": "Existed_Raid", 00:06:59.116 "uuid": "f3d63c1a-40b0-4f7d-b6f5-2e1f0e7be360", 00:06:59.116 "strip_size_kb": 0, 00:06:59.116 "state": "configuring", 00:06:59.116 "raid_level": "raid1", 00:06:59.116 "superblock": true, 00:06:59.116 "num_base_bdevs": 2, 00:06:59.116 "num_base_bdevs_discovered": 1, 00:06:59.116 "num_base_bdevs_operational": 2, 00:06:59.116 "base_bdevs_list": [ 00:06:59.117 { 00:06:59.117 "name": "BaseBdev1", 00:06:59.117 "uuid": "4ad9195c-c698-44e9-8ca8-0d2a529f0f86", 00:06:59.117 "is_configured": true, 00:06:59.117 "data_offset": 2048, 00:06:59.117 "data_size": 63488 00:06:59.117 }, 00:06:59.117 { 00:06:59.117 "name": "BaseBdev2", 00:06:59.117 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:06:59.117 "is_configured": false, 00:06:59.117 "data_offset": 0, 00:06:59.117 "data_size": 0 00:06:59.117 } 00:06:59.117 ] 00:06:59.117 }' 00:06:59.117 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.117 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.379 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:59.379 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.379 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.379 [2024-10-30 09:41:37.953839] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:59.379 [2024-10-30 09:41:37.953888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:59.379 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.379 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:59.379 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.379 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.379 [2024-10-30 09:41:37.961887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:59.379 [2024-10-30 09:41:37.963719] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:59.379 [2024-10-30 09:41:37.963760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:59.379 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:06:59.379 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:59.379 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:59.379 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:06:59.379 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.379 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:59.379 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:59.379 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:59.379 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:59.379 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.379 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.379 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.379 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.379 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:59.379 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.379 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.379 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.379 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:06:59.640 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.640 "name": "Existed_Raid", 00:06:59.640 "uuid": "81c63123-7145-4173-b7da-cae745ee813d", 00:06:59.640 "strip_size_kb": 0, 00:06:59.640 "state": "configuring", 00:06:59.640 "raid_level": "raid1", 00:06:59.640 "superblock": true, 00:06:59.640 "num_base_bdevs": 2, 00:06:59.640 "num_base_bdevs_discovered": 1, 00:06:59.640 "num_base_bdevs_operational": 2, 00:06:59.640 "base_bdevs_list": [ 00:06:59.640 { 00:06:59.640 "name": "BaseBdev1", 00:06:59.640 "uuid": "4ad9195c-c698-44e9-8ca8-0d2a529f0f86", 00:06:59.640 "is_configured": true, 00:06:59.640 "data_offset": 2048, 00:06:59.640 "data_size": 63488 00:06:59.640 }, 00:06:59.640 { 00:06:59.640 "name": "BaseBdev2", 00:06:59.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.640 "is_configured": false, 00:06:59.640 "data_offset": 0, 00:06:59.640 "data_size": 0 00:06:59.640 } 00:06:59.640 ] 00:06:59.640 }' 00:06:59.640 09:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.640 09:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.901 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:59.901 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.901 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.901 [2024-10-30 09:41:38.316768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:59.901 [2024-10-30 09:41:38.316976] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:59.901 [2024-10-30 09:41:38.316988] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:06:59.901 BaseBdev2 00:06:59.901 [2024-10-30 09:41:38.317255] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:59.902 [2024-10-30 09:41:38.317394] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:59.902 [2024-10-30 09:41:38.317412] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:59.902 [2024-10-30 09:41:38.317541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.902 [ 00:06:59.902 { 00:06:59.902 "name": "BaseBdev2", 00:06:59.902 "aliases": [ 00:06:59.902 "81971cff-79e9-4e95-b8a1-d0820160049b" 00:06:59.902 ], 00:06:59.902 "product_name": "Malloc disk", 00:06:59.902 "block_size": 512, 00:06:59.902 "num_blocks": 65536, 00:06:59.902 "uuid": "81971cff-79e9-4e95-b8a1-d0820160049b", 00:06:59.902 "assigned_rate_limits": { 00:06:59.902 "rw_ios_per_sec": 0, 00:06:59.902 "rw_mbytes_per_sec": 0, 00:06:59.902 "r_mbytes_per_sec": 0, 00:06:59.902 "w_mbytes_per_sec": 0 00:06:59.902 }, 00:06:59.902 "claimed": true, 00:06:59.902 "claim_type": "exclusive_write", 00:06:59.902 "zoned": false, 00:06:59.902 "supported_io_types": { 00:06:59.902 "read": true, 00:06:59.902 "write": true, 00:06:59.902 "unmap": true, 00:06:59.902 "flush": true, 00:06:59.902 "reset": true, 00:06:59.902 "nvme_admin": false, 00:06:59.902 "nvme_io": false, 00:06:59.902 "nvme_io_md": false, 00:06:59.902 "write_zeroes": true, 00:06:59.902 "zcopy": true, 00:06:59.902 "get_zone_info": false, 00:06:59.902 "zone_management": false, 00:06:59.902 "zone_append": false, 00:06:59.902 "compare": false, 00:06:59.902 "compare_and_write": false, 00:06:59.902 "abort": true, 00:06:59.902 "seek_hole": false, 00:06:59.902 "seek_data": false, 00:06:59.902 "copy": true, 00:06:59.902 "nvme_iov_md": false 00:06:59.902 }, 00:06:59.902 "memory_domains": [ 00:06:59.902 { 00:06:59.902 "dma_device_id": "system", 00:06:59.902 "dma_device_type": 1 00:06:59.902 }, 00:06:59.902 { 00:06:59.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.902 "dma_device_type": 2 00:06:59.902 } 00:06:59.902 ], 00:06:59.902 "driver_specific": {} 00:06:59.902 } 00:06:59.902 ] 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:06:59.902 
09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.902 09:41:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.902 "name": "Existed_Raid", 00:06:59.902 "uuid": "81c63123-7145-4173-b7da-cae745ee813d", 00:06:59.902 "strip_size_kb": 0, 00:06:59.902 "state": "online", 00:06:59.902 "raid_level": "raid1", 00:06:59.902 "superblock": true, 00:06:59.902 "num_base_bdevs": 2, 00:06:59.902 "num_base_bdevs_discovered": 2, 00:06:59.902 "num_base_bdevs_operational": 2, 00:06:59.902 "base_bdevs_list": [ 00:06:59.902 { 00:06:59.902 "name": "BaseBdev1", 00:06:59.902 "uuid": "4ad9195c-c698-44e9-8ca8-0d2a529f0f86", 00:06:59.902 "is_configured": true, 00:06:59.902 "data_offset": 2048, 00:06:59.902 "data_size": 63488 00:06:59.902 }, 00:06:59.902 { 00:06:59.902 "name": "BaseBdev2", 00:06:59.902 "uuid": "81971cff-79e9-4e95-b8a1-d0820160049b", 00:06:59.902 "is_configured": true, 00:06:59.902 "data_offset": 2048, 00:06:59.902 "data_size": 63488 00:06:59.902 } 00:06:59.902 ] 00:06:59.902 }' 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.902 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.163 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:00.163 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:00.163 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:00.163 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:00.163 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:00.163 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:00.163 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:00.163 09:41:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:00.163 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.163 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.163 [2024-10-30 09:41:38.661213] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:00.163 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.163 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:00.163 "name": "Existed_Raid", 00:07:00.163 "aliases": [ 00:07:00.163 "81c63123-7145-4173-b7da-cae745ee813d" 00:07:00.163 ], 00:07:00.163 "product_name": "Raid Volume", 00:07:00.163 "block_size": 512, 00:07:00.163 "num_blocks": 63488, 00:07:00.163 "uuid": "81c63123-7145-4173-b7da-cae745ee813d", 00:07:00.164 "assigned_rate_limits": { 00:07:00.164 "rw_ios_per_sec": 0, 00:07:00.164 "rw_mbytes_per_sec": 0, 00:07:00.164 "r_mbytes_per_sec": 0, 00:07:00.164 "w_mbytes_per_sec": 0 00:07:00.164 }, 00:07:00.164 "claimed": false, 00:07:00.164 "zoned": false, 00:07:00.164 "supported_io_types": { 00:07:00.164 "read": true, 00:07:00.164 "write": true, 00:07:00.164 "unmap": false, 00:07:00.164 "flush": false, 00:07:00.164 "reset": true, 00:07:00.164 "nvme_admin": false, 00:07:00.164 "nvme_io": false, 00:07:00.164 "nvme_io_md": false, 00:07:00.164 "write_zeroes": true, 00:07:00.164 "zcopy": false, 00:07:00.164 "get_zone_info": false, 00:07:00.164 "zone_management": false, 00:07:00.164 "zone_append": false, 00:07:00.164 "compare": false, 00:07:00.164 "compare_and_write": false, 00:07:00.164 "abort": false, 00:07:00.164 "seek_hole": false, 00:07:00.164 "seek_data": false, 00:07:00.164 "copy": false, 00:07:00.164 "nvme_iov_md": false 00:07:00.164 }, 00:07:00.164 "memory_domains": [ 00:07:00.164 { 00:07:00.164 "dma_device_id": 
"system", 00:07:00.164 "dma_device_type": 1 00:07:00.164 }, 00:07:00.164 { 00:07:00.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.164 "dma_device_type": 2 00:07:00.164 }, 00:07:00.164 { 00:07:00.164 "dma_device_id": "system", 00:07:00.164 "dma_device_type": 1 00:07:00.164 }, 00:07:00.164 { 00:07:00.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.164 "dma_device_type": 2 00:07:00.164 } 00:07:00.164 ], 00:07:00.164 "driver_specific": { 00:07:00.164 "raid": { 00:07:00.164 "uuid": "81c63123-7145-4173-b7da-cae745ee813d", 00:07:00.164 "strip_size_kb": 0, 00:07:00.164 "state": "online", 00:07:00.164 "raid_level": "raid1", 00:07:00.164 "superblock": true, 00:07:00.164 "num_base_bdevs": 2, 00:07:00.164 "num_base_bdevs_discovered": 2, 00:07:00.164 "num_base_bdevs_operational": 2, 00:07:00.164 "base_bdevs_list": [ 00:07:00.164 { 00:07:00.164 "name": "BaseBdev1", 00:07:00.164 "uuid": "4ad9195c-c698-44e9-8ca8-0d2a529f0f86", 00:07:00.164 "is_configured": true, 00:07:00.164 "data_offset": 2048, 00:07:00.164 "data_size": 63488 00:07:00.164 }, 00:07:00.164 { 00:07:00.164 "name": "BaseBdev2", 00:07:00.164 "uuid": "81971cff-79e9-4e95-b8a1-d0820160049b", 00:07:00.164 "is_configured": true, 00:07:00.164 "data_offset": 2048, 00:07:00.164 "data_size": 63488 00:07:00.164 } 00:07:00.164 ] 00:07:00.164 } 00:07:00.164 } 00:07:00.164 }' 00:07:00.164 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:00.164 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:00.164 BaseBdev2' 00:07:00.164 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:00.164 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:00.164 09:41:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:00.164 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:00.164 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.164 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.164 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:00.164 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 
00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.427 [2024-10-30 09:41:38.820974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:00.427 "name": "Existed_Raid", 00:07:00.427 "uuid": "81c63123-7145-4173-b7da-cae745ee813d", 00:07:00.427 "strip_size_kb": 0, 00:07:00.427 "state": "online", 00:07:00.427 "raid_level": "raid1", 00:07:00.427 "superblock": true, 00:07:00.427 "num_base_bdevs": 2, 00:07:00.427 "num_base_bdevs_discovered": 1, 00:07:00.427 "num_base_bdevs_operational": 1, 00:07:00.427 "base_bdevs_list": [ 00:07:00.427 { 00:07:00.427 "name": null, 00:07:00.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:00.427 "is_configured": false, 00:07:00.427 "data_offset": 0, 00:07:00.427 "data_size": 63488 00:07:00.427 }, 00:07:00.427 { 00:07:00.427 "name": "BaseBdev2", 00:07:00.427 "uuid": "81971cff-79e9-4e95-b8a1-d0820160049b", 00:07:00.427 "is_configured": true, 00:07:00.427 "data_offset": 2048, 00:07:00.427 "data_size": 63488 00:07:00.427 } 00:07:00.427 ] 00:07:00.427 }' 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:00.427 09:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.689 09:41:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:00.689 09:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:00.689 09:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.689 09:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:00.689 09:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.689 09:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.689 09:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.689 09:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:00.689 09:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:00.689 09:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:00.689 09:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.689 09:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.689 [2024-10-30 09:41:39.229614] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:00.689 [2024-10-30 09:41:39.229727] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:00.689 [2024-10-30 09:41:39.290199] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:00.689 [2024-10-30 09:41:39.290260] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:00.689 [2024-10-30 09:41:39.290272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:00.689 09:41:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.689 09:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:00.689 09:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:00.689 09:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.689 09:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:00.689 09:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.689 09:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.689 09:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.950 09:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:00.950 09:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:00.950 09:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:00.950 09:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61682 00:07:00.950 09:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 61682 ']' 00:07:00.950 09:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 61682 00:07:00.950 09:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:07:00.950 09:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:00.950 09:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61682 00:07:00.950 09:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:07:00.950 09:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:00.950 killing process with pid 61682 00:07:00.950 09:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61682' 00:07:00.950 09:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 61682 00:07:00.950 [2024-10-30 09:41:39.346514] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:00.950 09:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 61682 00:07:00.950 [2024-10-30 09:41:39.357128] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:01.524 09:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:01.524 00:07:01.524 real 0m3.743s 00:07:01.524 user 0m5.424s 00:07:01.524 sys 0m0.534s 00:07:01.524 09:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.524 ************************************ 00:07:01.524 END TEST raid_state_function_test_sb 00:07:01.524 ************************************ 00:07:01.524 09:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.524 09:41:40 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:01.525 09:41:40 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:01.525 09:41:40 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.525 09:41:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:01.525 ************************************ 00:07:01.525 START TEST raid_superblock_test 00:07:01.525 ************************************ 00:07:01.525 09:41:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:07:01.525 09:41:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:07:01.525 09:41:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:01.525 09:41:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:01.525 09:41:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:01.525 09:41:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:01.525 09:41:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:01.525 09:41:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:01.525 09:41:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:01.525 09:41:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:01.525 09:41:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:01.525 09:41:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:01.525 09:41:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:01.525 09:41:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:01.525 09:41:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:01.525 09:41:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:01.525 09:41:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61918 00:07:01.525 09:41:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61918 00:07:01.525 09:41:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 61918 ']' 00:07:01.525 09:41:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.525 09:41:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:01.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.798 09:41:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.798 09:41:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:01.798 09:41:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.798 09:41:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:01.798 [2024-10-30 09:41:40.204150] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:07:01.798 [2024-10-30 09:41:40.204279] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61918 ] 00:07:01.798 [2024-10-30 09:41:40.366203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.060 [2024-10-30 09:41:40.469231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.060 [2024-10-30 09:41:40.605134] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.060 [2024-10-30 09:41:40.605184] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.633 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:02.633 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:07:02.633 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:02.633 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= 
num_base_bdevs )) 00:07:02.633 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:02.633 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:02.633 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:02.633 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:02.633 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:02.633 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:02.633 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:02.633 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.633 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.633 malloc1 00:07:02.633 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.633 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:02.633 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.633 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.633 [2024-10-30 09:41:41.089467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:02.633 [2024-10-30 09:41:41.089523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:02.633 [2024-10-30 09:41:41.089544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:02.633 [2024-10-30 09:41:41.089553] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:07:02.633 [2024-10-30 09:41:41.091669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:02.633 [2024-10-30 09:41:41.091704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:02.633 pt1 00:07:02.633 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.633 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:02.633 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:02.633 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:02.633 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:02.633 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.634 malloc2 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.634 [2024-10-30 09:41:41.125372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:02.634 [2024-10-30 09:41:41.125417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:02.634 [2024-10-30 09:41:41.125444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:02.634 [2024-10-30 09:41:41.125454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:02.634 [2024-10-30 09:41:41.127554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:02.634 [2024-10-30 09:41:41.127587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:02.634 pt2 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.634 [2024-10-30 09:41:41.133422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:02.634 [2024-10-30 09:41:41.135285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:02.634 [2024-10-30 09:41:41.135441] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:02.634 [2024-10-30 09:41:41.135456] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, 
blocklen 512 00:07:02.634 [2024-10-30 09:41:41.135698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:02.634 [2024-10-30 09:41:41.135843] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:02.634 [2024-10-30 09:41:41.135857] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:02.634 [2024-10-30 09:41:41.135989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.634 09:41:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.634 "name": "raid_bdev1", 00:07:02.634 "uuid": "eeea9400-856e-4490-afa3-294337cc480c", 00:07:02.634 "strip_size_kb": 0, 00:07:02.634 "state": "online", 00:07:02.634 "raid_level": "raid1", 00:07:02.634 "superblock": true, 00:07:02.634 "num_base_bdevs": 2, 00:07:02.634 "num_base_bdevs_discovered": 2, 00:07:02.634 "num_base_bdevs_operational": 2, 00:07:02.634 "base_bdevs_list": [ 00:07:02.634 { 00:07:02.634 "name": "pt1", 00:07:02.634 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:02.634 "is_configured": true, 00:07:02.634 "data_offset": 2048, 00:07:02.634 "data_size": 63488 00:07:02.634 }, 00:07:02.634 { 00:07:02.634 "name": "pt2", 00:07:02.634 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:02.634 "is_configured": true, 00:07:02.634 "data_offset": 2048, 00:07:02.634 "data_size": 63488 00:07:02.634 } 00:07:02.634 ] 00:07:02.634 }' 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.634 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.895 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:02.895 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:02.895 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:02.895 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:02.895 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local 
name 00:07:02.895 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:02.895 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:02.895 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:02.895 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.895 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.895 [2024-10-30 09:41:41.465791] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:02.895 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.895 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:02.895 "name": "raid_bdev1", 00:07:02.895 "aliases": [ 00:07:02.895 "eeea9400-856e-4490-afa3-294337cc480c" 00:07:02.895 ], 00:07:02.895 "product_name": "Raid Volume", 00:07:02.895 "block_size": 512, 00:07:02.895 "num_blocks": 63488, 00:07:02.895 "uuid": "eeea9400-856e-4490-afa3-294337cc480c", 00:07:02.895 "assigned_rate_limits": { 00:07:02.895 "rw_ios_per_sec": 0, 00:07:02.896 "rw_mbytes_per_sec": 0, 00:07:02.896 "r_mbytes_per_sec": 0, 00:07:02.896 "w_mbytes_per_sec": 0 00:07:02.896 }, 00:07:02.896 "claimed": false, 00:07:02.896 "zoned": false, 00:07:02.896 "supported_io_types": { 00:07:02.896 "read": true, 00:07:02.896 "write": true, 00:07:02.896 "unmap": false, 00:07:02.896 "flush": false, 00:07:02.896 "reset": true, 00:07:02.896 "nvme_admin": false, 00:07:02.896 "nvme_io": false, 00:07:02.896 "nvme_io_md": false, 00:07:02.896 "write_zeroes": true, 00:07:02.896 "zcopy": false, 00:07:02.896 "get_zone_info": false, 00:07:02.896 "zone_management": false, 00:07:02.896 "zone_append": false, 00:07:02.896 "compare": false, 00:07:02.896 "compare_and_write": false, 00:07:02.896 "abort": false, 00:07:02.896 "seek_hole": 
false, 00:07:02.896 "seek_data": false, 00:07:02.896 "copy": false, 00:07:02.896 "nvme_iov_md": false 00:07:02.896 }, 00:07:02.896 "memory_domains": [ 00:07:02.896 { 00:07:02.896 "dma_device_id": "system", 00:07:02.896 "dma_device_type": 1 00:07:02.896 }, 00:07:02.896 { 00:07:02.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:02.896 "dma_device_type": 2 00:07:02.896 }, 00:07:02.896 { 00:07:02.896 "dma_device_id": "system", 00:07:02.896 "dma_device_type": 1 00:07:02.896 }, 00:07:02.896 { 00:07:02.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:02.896 "dma_device_type": 2 00:07:02.896 } 00:07:02.896 ], 00:07:02.896 "driver_specific": { 00:07:02.896 "raid": { 00:07:02.896 "uuid": "eeea9400-856e-4490-afa3-294337cc480c", 00:07:02.896 "strip_size_kb": 0, 00:07:02.896 "state": "online", 00:07:02.896 "raid_level": "raid1", 00:07:02.896 "superblock": true, 00:07:02.896 "num_base_bdevs": 2, 00:07:02.896 "num_base_bdevs_discovered": 2, 00:07:02.896 "num_base_bdevs_operational": 2, 00:07:02.896 "base_bdevs_list": [ 00:07:02.896 { 00:07:02.896 "name": "pt1", 00:07:02.896 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:02.896 "is_configured": true, 00:07:02.896 "data_offset": 2048, 00:07:02.896 "data_size": 63488 00:07:02.896 }, 00:07:02.896 { 00:07:02.896 "name": "pt2", 00:07:02.896 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:02.896 "is_configured": true, 00:07:02.896 "data_offset": 2048, 00:07:02.896 "data_size": 63488 00:07:02.896 } 00:07:02.896 ] 00:07:02.896 } 00:07:02.896 } 00:07:02.896 }' 00:07:02.896 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:03.157 pt2' 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.157 09:41:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.157 [2024-10-30 09:41:41.621829] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=eeea9400-856e-4490-afa3-294337cc480c 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z eeea9400-856e-4490-afa3-294337cc480c ']' 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.157 [2024-10-30 09:41:41.657519] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:03.157 [2024-10-30 09:41:41.657546] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:03.157 [2024-10-30 09:41:41.657623] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:03.157 [2024-10-30 09:41:41.657683] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:03.157 [2024-10-30 09:41:41.657702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.157 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.158 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:03.158 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:03.158 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:03.158 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:03.158 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:03.158 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.158 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:03.158 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.158 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:03.158 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.158 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.158 [2024-10-30 09:41:41.753556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:03.158 [2024-10-30 09:41:41.755436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:03.158 [2024-10-30 09:41:41.755508] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:07:03.158 [2024-10-30 09:41:41.755556] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:03.158 [2024-10-30 09:41:41.755571] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:03.158 [2024-10-30 09:41:41.755582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:03.158 request: 00:07:03.158 { 00:07:03.158 "name": "raid_bdev1", 00:07:03.158 "raid_level": "raid1", 00:07:03.158 "base_bdevs": [ 00:07:03.158 "malloc1", 00:07:03.158 "malloc2" 00:07:03.158 ], 00:07:03.158 "superblock": false, 00:07:03.158 "method": "bdev_raid_create", 00:07:03.158 "req_id": 1 00:07:03.158 } 00:07:03.158 Got JSON-RPC error response 00:07:03.158 response: 00:07:03.158 { 00:07:03.158 "code": -17, 00:07:03.158 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:03.158 } 00:07:03.158 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:03.158 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:03.158 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.158 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.158 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.158 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.158 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.158 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.158 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:03.158 09:41:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.420 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:03.420 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:03.420 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:03.420 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.420 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.420 [2024-10-30 09:41:41.801562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:03.420 [2024-10-30 09:41:41.801615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:03.420 [2024-10-30 09:41:41.801634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:03.420 [2024-10-30 09:41:41.801645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:03.420 [2024-10-30 09:41:41.803988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:03.420 [2024-10-30 09:41:41.804035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:03.420 [2024-10-30 09:41:41.804140] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:03.420 [2024-10-30 09:41:41.804197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:03.420 pt1 00:07:03.420 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.420 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:03.420 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:03.420 09:41:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:03.420 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:03.420 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:03.420 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:03.420 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.420 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.420 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.420 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.420 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.420 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:03.420 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.420 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.420 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.420 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.420 "name": "raid_bdev1", 00:07:03.420 "uuid": "eeea9400-856e-4490-afa3-294337cc480c", 00:07:03.420 "strip_size_kb": 0, 00:07:03.420 "state": "configuring", 00:07:03.420 "raid_level": "raid1", 00:07:03.420 "superblock": true, 00:07:03.420 "num_base_bdevs": 2, 00:07:03.420 "num_base_bdevs_discovered": 1, 00:07:03.420 "num_base_bdevs_operational": 2, 00:07:03.420 "base_bdevs_list": [ 00:07:03.420 { 00:07:03.420 "name": "pt1", 00:07:03.420 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:03.420 
"is_configured": true, 00:07:03.420 "data_offset": 2048, 00:07:03.420 "data_size": 63488 00:07:03.420 }, 00:07:03.420 { 00:07:03.420 "name": null, 00:07:03.420 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:03.420 "is_configured": false, 00:07:03.420 "data_offset": 2048, 00:07:03.420 "data_size": 63488 00:07:03.420 } 00:07:03.420 ] 00:07:03.420 }' 00:07:03.420 09:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.420 09:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.682 [2024-10-30 09:41:42.129652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:03.682 [2024-10-30 09:41:42.129713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:03.682 [2024-10-30 09:41:42.129731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:03.682 [2024-10-30 09:41:42.129743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:03.682 [2024-10-30 09:41:42.130173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:03.682 [2024-10-30 09:41:42.130202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:03.682 [2024-10-30 09:41:42.130272] 
bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:03.682 [2024-10-30 09:41:42.130294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:03.682 [2024-10-30 09:41:42.130399] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:03.682 [2024-10-30 09:41:42.130410] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:03.682 [2024-10-30 09:41:42.130640] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:03.682 [2024-10-30 09:41:42.130779] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:03.682 [2024-10-30 09:41:42.130788] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:03.682 [2024-10-30 09:41:42.130916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.682 pt2 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:03.682 
09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.682 "name": "raid_bdev1", 00:07:03.682 "uuid": "eeea9400-856e-4490-afa3-294337cc480c", 00:07:03.682 "strip_size_kb": 0, 00:07:03.682 "state": "online", 00:07:03.682 "raid_level": "raid1", 00:07:03.682 "superblock": true, 00:07:03.682 "num_base_bdevs": 2, 00:07:03.682 "num_base_bdevs_discovered": 2, 00:07:03.682 "num_base_bdevs_operational": 2, 00:07:03.682 "base_bdevs_list": [ 00:07:03.682 { 00:07:03.682 "name": "pt1", 00:07:03.682 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:03.682 "is_configured": true, 00:07:03.682 "data_offset": 2048, 00:07:03.682 "data_size": 63488 00:07:03.682 }, 00:07:03.682 { 00:07:03.682 "name": "pt2", 00:07:03.682 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:03.682 "is_configured": true, 00:07:03.682 "data_offset": 2048, 00:07:03.682 "data_size": 63488 00:07:03.682 } 00:07:03.682 ] 00:07:03.682 }' 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:07:03.682 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.943 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:03.943 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:03.943 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:03.943 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:03.943 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:03.943 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:03.943 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:03.943 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:03.943 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.943 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.943 [2024-10-30 09:41:42.466005] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.943 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.943 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:03.943 "name": "raid_bdev1", 00:07:03.943 "aliases": [ 00:07:03.943 "eeea9400-856e-4490-afa3-294337cc480c" 00:07:03.943 ], 00:07:03.943 "product_name": "Raid Volume", 00:07:03.943 "block_size": 512, 00:07:03.943 "num_blocks": 63488, 00:07:03.943 "uuid": "eeea9400-856e-4490-afa3-294337cc480c", 00:07:03.943 "assigned_rate_limits": { 00:07:03.943 "rw_ios_per_sec": 0, 00:07:03.943 "rw_mbytes_per_sec": 0, 00:07:03.943 "r_mbytes_per_sec": 0, 00:07:03.943 "w_mbytes_per_sec": 0 
00:07:03.943 }, 00:07:03.943 "claimed": false, 00:07:03.943 "zoned": false, 00:07:03.943 "supported_io_types": { 00:07:03.943 "read": true, 00:07:03.943 "write": true, 00:07:03.943 "unmap": false, 00:07:03.943 "flush": false, 00:07:03.943 "reset": true, 00:07:03.943 "nvme_admin": false, 00:07:03.943 "nvme_io": false, 00:07:03.943 "nvme_io_md": false, 00:07:03.943 "write_zeroes": true, 00:07:03.943 "zcopy": false, 00:07:03.943 "get_zone_info": false, 00:07:03.943 "zone_management": false, 00:07:03.943 "zone_append": false, 00:07:03.943 "compare": false, 00:07:03.943 "compare_and_write": false, 00:07:03.943 "abort": false, 00:07:03.943 "seek_hole": false, 00:07:03.943 "seek_data": false, 00:07:03.943 "copy": false, 00:07:03.943 "nvme_iov_md": false 00:07:03.943 }, 00:07:03.943 "memory_domains": [ 00:07:03.943 { 00:07:03.943 "dma_device_id": "system", 00:07:03.943 "dma_device_type": 1 00:07:03.943 }, 00:07:03.943 { 00:07:03.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.943 "dma_device_type": 2 00:07:03.943 }, 00:07:03.943 { 00:07:03.943 "dma_device_id": "system", 00:07:03.943 "dma_device_type": 1 00:07:03.943 }, 00:07:03.943 { 00:07:03.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.943 "dma_device_type": 2 00:07:03.943 } 00:07:03.943 ], 00:07:03.943 "driver_specific": { 00:07:03.943 "raid": { 00:07:03.943 "uuid": "eeea9400-856e-4490-afa3-294337cc480c", 00:07:03.943 "strip_size_kb": 0, 00:07:03.943 "state": "online", 00:07:03.943 "raid_level": "raid1", 00:07:03.943 "superblock": true, 00:07:03.943 "num_base_bdevs": 2, 00:07:03.943 "num_base_bdevs_discovered": 2, 00:07:03.943 "num_base_bdevs_operational": 2, 00:07:03.943 "base_bdevs_list": [ 00:07:03.943 { 00:07:03.943 "name": "pt1", 00:07:03.943 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:03.943 "is_configured": true, 00:07:03.943 "data_offset": 2048, 00:07:03.943 "data_size": 63488 00:07:03.943 }, 00:07:03.943 { 00:07:03.943 "name": "pt2", 00:07:03.943 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:07:03.943 "is_configured": true, 00:07:03.943 "data_offset": 2048, 00:07:03.943 "data_size": 63488 00:07:03.943 } 00:07:03.943 ] 00:07:03.943 } 00:07:03.943 } 00:07:03.943 }' 00:07:03.943 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:03.943 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:03.943 pt2' 00:07:03.943 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.943 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:03.943 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:03.943 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:03.943 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.943 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.943 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.204 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.204 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:04.204 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:04.204 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:04.204 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:04.204 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:04.204 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.204 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:04.204 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.204 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:04.204 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:04.204 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:04.204 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:04.204 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.204 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.205 [2024-10-30 09:41:42.630029] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' eeea9400-856e-4490-afa3-294337cc480c '!=' eeea9400-856e-4490-afa3-294337cc480c ']' 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:04.205 [2024-10-30 09:41:42.665795] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:04.205 "name": "raid_bdev1", 00:07:04.205 "uuid": "eeea9400-856e-4490-afa3-294337cc480c", 00:07:04.205 "strip_size_kb": 0, 00:07:04.205 "state": "online", 00:07:04.205 "raid_level": "raid1", 00:07:04.205 "superblock": true, 00:07:04.205 "num_base_bdevs": 2, 00:07:04.205 "num_base_bdevs_discovered": 1, 00:07:04.205 "num_base_bdevs_operational": 1, 00:07:04.205 "base_bdevs_list": [ 00:07:04.205 { 00:07:04.205 "name": null, 00:07:04.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:04.205 "is_configured": false, 00:07:04.205 "data_offset": 0, 00:07:04.205 "data_size": 63488 00:07:04.205 }, 00:07:04.205 { 00:07:04.205 "name": "pt2", 00:07:04.205 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:04.205 "is_configured": true, 00:07:04.205 "data_offset": 2048, 00:07:04.205 "data_size": 63488 00:07:04.205 } 00:07:04.205 ] 00:07:04.205 }' 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.205 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.550 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:04.550 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.550 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.550 [2024-10-30 09:41:42.985860] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:04.550 [2024-10-30 09:41:42.985890] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:04.550 [2024-10-30 09:41:42.985958] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:04.550 [2024-10-30 09:41:42.986004] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:04.550 [2024-10-30 09:41:42.986016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:04.550 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.550 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.550 09:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:04.550 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.550 09:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.550 [2024-10-30 09:41:43.037853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:04.550 [2024-10-30 09:41:43.037909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:04.550 [2024-10-30 09:41:43.037923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:04.550 [2024-10-30 09:41:43.037934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:04.550 [2024-10-30 09:41:43.040119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:04.550 [2024-10-30 09:41:43.040158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:04.550 [2024-10-30 09:41:43.040230] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:04.550 [2024-10-30 09:41:43.040270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:04.550 [2024-10-30 09:41:43.040374] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:04.550 [2024-10-30 09:41:43.040388] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:04.550 [2024-10-30 09:41:43.040623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:04.550 [2024-10-30 09:41:43.040765] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:04.550 [2024-10-30 09:41:43.040780] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:07:04.550 [2024-10-30 09:41:43.040908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:04.550 pt2 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:04.550 "name": "raid_bdev1", 00:07:04.550 "uuid": "eeea9400-856e-4490-afa3-294337cc480c", 00:07:04.550 "strip_size_kb": 0, 00:07:04.550 "state": "online", 00:07:04.550 "raid_level": "raid1", 00:07:04.550 "superblock": true, 00:07:04.550 "num_base_bdevs": 2, 00:07:04.550 "num_base_bdevs_discovered": 1, 00:07:04.550 "num_base_bdevs_operational": 1, 00:07:04.550 "base_bdevs_list": [ 00:07:04.550 { 00:07:04.550 "name": null, 00:07:04.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:04.550 "is_configured": false, 00:07:04.550 "data_offset": 2048, 00:07:04.550 "data_size": 63488 00:07:04.550 }, 00:07:04.550 { 00:07:04.550 "name": "pt2", 00:07:04.550 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:04.550 "is_configured": true, 00:07:04.550 "data_offset": 2048, 00:07:04.550 "data_size": 63488 00:07:04.550 } 00:07:04.550 ] 00:07:04.550 }' 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.550 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.812 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:04.812 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.812 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.812 [2024-10-30 09:41:43.361922] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:04.812 [2024-10-30 09:41:43.361953] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:04.812 [2024-10-30 09:41:43.362015] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:04.812 [2024-10-30 09:41:43.362075] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:04.812 [2024-10-30 09:41:43.362085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:07:04.812 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.812 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.812 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.812 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.812 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:04.812 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.812 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:04.813 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:07:04.813 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:04.813 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:04.813 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.813 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.813 [2024-10-30 09:41:43.405947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:04.813 [2024-10-30 09:41:43.406001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:04.813 [2024-10-30 09:41:43.406019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:07:04.813 [2024-10-30 09:41:43.406027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:04.813 [2024-10-30 09:41:43.408232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:04.813 [2024-10-30 09:41:43.408266] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:04.813 [2024-10-30 09:41:43.408341] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:04.813 [2024-10-30 09:41:43.408399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:04.813 [2024-10-30 09:41:43.408522] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:04.813 [2024-10-30 09:41:43.408532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:04.813 [2024-10-30 09:41:43.408548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:07:04.813 [2024-10-30 09:41:43.408595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:04.813 [2024-10-30 09:41:43.408662] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:07:04.813 [2024-10-30 09:41:43.408672] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:04.813 [2024-10-30 09:41:43.408920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:04.813 [2024-10-30 09:41:43.409078] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:07:04.813 [2024-10-30 09:41:43.409092] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:07:04.813 [2024-10-30 09:41:43.409228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:04.813 pt1 00:07:04.813 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.813 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:04.813 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:07:04.813 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:04.813 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:04.813 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:04.813 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:04.813 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:04.813 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.813 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.813 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.813 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.813 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.813 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:04.813 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.813 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.813 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.071 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.071 "name": "raid_bdev1", 00:07:05.071 "uuid": "eeea9400-856e-4490-afa3-294337cc480c", 00:07:05.071 "strip_size_kb": 0, 00:07:05.071 "state": "online", 00:07:05.071 "raid_level": "raid1", 00:07:05.071 "superblock": true, 00:07:05.071 "num_base_bdevs": 2, 00:07:05.071 "num_base_bdevs_discovered": 1, 00:07:05.071 "num_base_bdevs_operational": 
1, 00:07:05.071 "base_bdevs_list": [ 00:07:05.071 { 00:07:05.071 "name": null, 00:07:05.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.071 "is_configured": false, 00:07:05.071 "data_offset": 2048, 00:07:05.071 "data_size": 63488 00:07:05.071 }, 00:07:05.071 { 00:07:05.071 "name": "pt2", 00:07:05.071 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:05.071 "is_configured": true, 00:07:05.071 "data_offset": 2048, 00:07:05.071 "data_size": 63488 00:07:05.071 } 00:07:05.071 ] 00:07:05.071 }' 00:07:05.072 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.072 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.332 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:05.332 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:05.332 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.332 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.332 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.332 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:05.332 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:05.332 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:05.332 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.332 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.332 [2024-10-30 09:41:43.758281] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:05.332 09:41:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.332 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' eeea9400-856e-4490-afa3-294337cc480c '!=' eeea9400-856e-4490-afa3-294337cc480c ']' 00:07:05.332 09:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61918 00:07:05.332 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 61918 ']' 00:07:05.332 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 61918 00:07:05.332 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:07:05.332 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:05.332 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61918 00:07:05.332 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:05.332 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:05.332 killing process with pid 61918 00:07:05.332 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61918' 00:07:05.332 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 61918 00:07:05.332 09:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 61918 00:07:05.332 [2024-10-30 09:41:43.800677] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:05.332 [2024-10-30 09:41:43.800757] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:05.332 [2024-10-30 09:41:43.800802] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:05.332 [2024-10-30 09:41:43.800823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state 
offline 00:07:05.332 [2024-10-30 09:41:43.930836] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:06.273 09:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:06.273 00:07:06.273 real 0m4.475s 00:07:06.273 user 0m6.786s 00:07:06.273 sys 0m0.701s 00:07:06.273 09:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:06.273 09:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.273 ************************************ 00:07:06.273 END TEST raid_superblock_test 00:07:06.273 ************************************ 00:07:06.273 09:41:44 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:06.273 09:41:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:06.273 09:41:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:06.273 09:41:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:06.273 ************************************ 00:07:06.273 START TEST raid_read_error_test 00:07:06.273 ************************************ 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 read 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kzuwWuXJuz 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62233 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62233 00:07:06.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 62233 ']' 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.273 09:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:06.273 [2024-10-30 09:41:44.754745] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:07:06.273 [2024-10-30 09:41:44.754873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62233 ] 00:07:06.532 [2024-10-30 09:41:44.910698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.532 [2024-10-30 09:41:45.011162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.532 [2024-10-30 09:41:45.147485] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.532 [2024-10-30 09:41:45.147514] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.134 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:07.134 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:07:07.134 09:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:07.134 09:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:07.134 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.134 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.134 BaseBdev1_malloc 00:07:07.134 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.134 09:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:07.134 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.134 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.134 true 00:07:07.134 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:07.134 09:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.135 [2024-10-30 09:41:45.637778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:07.135 [2024-10-30 09:41:45.637832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.135 [2024-10-30 09:41:45.637853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:07.135 [2024-10-30 09:41:45.637865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:07.135 [2024-10-30 09:41:45.639993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.135 [2024-10-30 09:41:45.640029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:07.135 BaseBdev1 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.135 BaseBdev2_malloc 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.135 true 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.135 [2024-10-30 09:41:45.682882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:07.135 [2024-10-30 09:41:45.682944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.135 [2024-10-30 09:41:45.682963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:07.135 [2024-10-30 09:41:45.682974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:07.135 [2024-10-30 09:41:45.685116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.135 [2024-10-30 09:41:45.685152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:07.135 BaseBdev2 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.135 [2024-10-30 09:41:45.690941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:07.135 
[2024-10-30 09:41:45.692781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:07.135 [2024-10-30 09:41:45.692977] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:07.135 [2024-10-30 09:41:45.692992] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:07.135 [2024-10-30 09:41:45.693244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:07.135 [2024-10-30 09:41:45.693406] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:07.135 [2024-10-30 09:41:45.693415] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:07.135 [2024-10-30 09:41:45.693561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.135 "name": "raid_bdev1", 00:07:07.135 "uuid": "c175a38b-bfcf-4830-8541-3a8187a9999f", 00:07:07.135 "strip_size_kb": 0, 00:07:07.135 "state": "online", 00:07:07.135 "raid_level": "raid1", 00:07:07.135 "superblock": true, 00:07:07.135 "num_base_bdevs": 2, 00:07:07.135 "num_base_bdevs_discovered": 2, 00:07:07.135 "num_base_bdevs_operational": 2, 00:07:07.135 "base_bdevs_list": [ 00:07:07.135 { 00:07:07.135 "name": "BaseBdev1", 00:07:07.135 "uuid": "0c3cbc73-c2c2-502e-95bb-1ebf8c3c8ac3", 00:07:07.135 "is_configured": true, 00:07:07.135 "data_offset": 2048, 00:07:07.135 "data_size": 63488 00:07:07.135 }, 00:07:07.135 { 00:07:07.135 "name": "BaseBdev2", 00:07:07.135 "uuid": "2ab6a0cc-d491-5082-9647-55cde4f2e4a8", 00:07:07.135 "is_configured": true, 00:07:07.135 "data_offset": 2048, 00:07:07.135 "data_size": 63488 00:07:07.135 } 00:07:07.135 ] 00:07:07.135 }' 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.135 09:41:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.405 09:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:07.405 09:41:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:07.667 [2024-10-30 09:41:46.083975] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.613 "name": "raid_bdev1", 00:07:08.613 "uuid": "c175a38b-bfcf-4830-8541-3a8187a9999f", 00:07:08.613 "strip_size_kb": 0, 00:07:08.613 "state": "online", 00:07:08.613 "raid_level": "raid1", 00:07:08.613 "superblock": true, 00:07:08.613 "num_base_bdevs": 2, 00:07:08.613 "num_base_bdevs_discovered": 2, 00:07:08.613 "num_base_bdevs_operational": 2, 00:07:08.613 "base_bdevs_list": [ 00:07:08.613 { 00:07:08.613 "name": "BaseBdev1", 00:07:08.613 "uuid": "0c3cbc73-c2c2-502e-95bb-1ebf8c3c8ac3", 00:07:08.613 "is_configured": true, 00:07:08.613 "data_offset": 2048, 00:07:08.613 "data_size": 63488 00:07:08.613 }, 00:07:08.613 { 00:07:08.613 "name": "BaseBdev2", 00:07:08.613 "uuid": "2ab6a0cc-d491-5082-9647-55cde4f2e4a8", 00:07:08.613 "is_configured": true, 00:07:08.613 "data_offset": 2048, 00:07:08.613 "data_size": 63488 00:07:08.613 } 00:07:08.613 ] 00:07:08.613 }' 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.613 09:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.874 09:41:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:08.874 09:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.874 09:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.874 [2024-10-30 09:41:47.325438] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:08.874 [2024-10-30 09:41:47.325471] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:08.874 [2024-10-30 09:41:47.328507] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.874 [2024-10-30 09:41:47.328551] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:08.874 [2024-10-30 09:41:47.328636] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:08.874 [2024-10-30 09:41:47.328649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:08.874 { 00:07:08.874 "results": [ 00:07:08.874 { 00:07:08.874 "job": "raid_bdev1", 00:07:08.874 "core_mask": "0x1", 00:07:08.874 "workload": "randrw", 00:07:08.874 "percentage": 50, 00:07:08.874 "status": "finished", 00:07:08.874 "queue_depth": 1, 00:07:08.874 "io_size": 131072, 00:07:08.874 "runtime": 1.239595, 00:07:08.874 "iops": 18541.53977710462, 00:07:08.874 "mibps": 2317.6924721380774, 00:07:08.874 "io_failed": 0, 00:07:08.874 "io_timeout": 0, 00:07:08.874 "avg_latency_us": 50.82467321748909, 00:07:08.874 "min_latency_us": 29.341538461538462, 00:07:08.874 "max_latency_us": 1688.8123076923077 00:07:08.874 } 00:07:08.874 ], 00:07:08.874 "core_count": 1 00:07:08.874 } 00:07:08.874 09:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.874 09:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62233 00:07:08.874 09:41:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 62233 ']' 00:07:08.874 09:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 62233 00:07:08.874 09:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:07:08.874 09:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:08.874 09:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62233 00:07:08.874 killing process with pid 62233 00:07:08.874 09:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:08.874 09:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:08.874 09:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62233' 00:07:08.874 09:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 62233 00:07:08.874 [2024-10-30 09:41:47.358524] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:08.874 09:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 62233 00:07:08.874 [2024-10-30 09:41:47.443395] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:09.816 09:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:09.816 09:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:09.816 09:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kzuwWuXJuz 00:07:09.816 09:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:09.816 09:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:09.816 09:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:09.816 09:41:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:09.816 09:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:09.816 00:07:09.816 real 0m3.517s 00:07:09.816 user 0m4.226s 00:07:09.816 sys 0m0.345s 00:07:09.816 09:41:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:09.816 09:41:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.816 ************************************ 00:07:09.816 END TEST raid_read_error_test 00:07:09.816 ************************************ 00:07:09.816 09:41:48 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:07:09.816 09:41:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:09.816 09:41:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:09.816 09:41:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:09.816 ************************************ 00:07:09.816 START TEST raid_write_error_test 00:07:09.816 ************************************ 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 write 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:09.816 
09:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WkxrBbzeWl 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62368 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62368 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 62368 ']' 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:09.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.816 09:41:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:09.816 [2024-10-30 09:41:48.330688] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:07:09.816 [2024-10-30 09:41:48.330939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62368 ] 00:07:10.075 [2024-10-30 09:41:48.488045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.075 [2024-10-30 09:41:48.588307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.336 [2024-10-30 09:41:48.725042] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.336 [2024-10-30 09:41:48.725247] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.596 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:10.596 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:07:10.596 09:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
for bdev in "${base_bdevs[@]}" 00:07:10.596 09:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:10.596 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.596 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.857 BaseBdev1_malloc 00:07:10.857 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.857 09:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:10.857 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.857 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.857 true 00:07:10.857 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.857 09:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:10.857 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.857 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.857 [2024-10-30 09:41:49.233904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:10.857 [2024-10-30 09:41:49.233955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.857 [2024-10-30 09:41:49.233974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:10.857 [2024-10-30 09:41:49.233985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.857 [2024-10-30 09:41:49.236125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.858 [2024-10-30 09:41:49.236256] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:10.858 BaseBdev1 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.858 BaseBdev2_malloc 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.858 true 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.858 [2024-10-30 09:41:49.277669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:10.858 [2024-10-30 09:41:49.277716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.858 [2024-10-30 09:41:49.277733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:10.858 
[2024-10-30 09:41:49.277744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.858 [2024-10-30 09:41:49.279833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.858 [2024-10-30 09:41:49.279967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:10.858 BaseBdev2 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.858 [2024-10-30 09:41:49.285723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:10.858 [2024-10-30 09:41:49.287576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:10.858 [2024-10-30 09:41:49.287764] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:10.858 [2024-10-30 09:41:49.287777] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:10.858 [2024-10-30 09:41:49.288019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:10.858 [2024-10-30 09:41:49.288192] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:10.858 [2024-10-30 09:41:49.288202] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:10.858 [2024-10-30 09:41:49.288337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.858 
09:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.858 "name": "raid_bdev1", 00:07:10.858 "uuid": "f9385bf5-2022-4de5-906e-4deb55120165", 00:07:10.858 "strip_size_kb": 0, 00:07:10.858 "state": "online", 00:07:10.858 "raid_level": "raid1", 00:07:10.858 "superblock": true, 00:07:10.858 
"num_base_bdevs": 2, 00:07:10.858 "num_base_bdevs_discovered": 2, 00:07:10.858 "num_base_bdevs_operational": 2, 00:07:10.858 "base_bdevs_list": [ 00:07:10.858 { 00:07:10.858 "name": "BaseBdev1", 00:07:10.858 "uuid": "37fd6e27-e0ba-5156-9da1-7d559b221d7a", 00:07:10.858 "is_configured": true, 00:07:10.858 "data_offset": 2048, 00:07:10.858 "data_size": 63488 00:07:10.858 }, 00:07:10.858 { 00:07:10.858 "name": "BaseBdev2", 00:07:10.858 "uuid": "8e182b95-8d0f-5c4c-8f05-df38a85c685f", 00:07:10.858 "is_configured": true, 00:07:10.858 "data_offset": 2048, 00:07:10.858 "data_size": 63488 00:07:10.858 } 00:07:10.858 ] 00:07:10.858 }' 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.858 09:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.119 09:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:11.119 09:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:11.119 [2024-10-30 09:41:49.678749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:12.062 09:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:12.062 09:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.062 09:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.062 [2024-10-30 09:41:50.596582] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:07:12.062 [2024-10-30 09:41:50.596633] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:12.062 [2024-10-30 09:41:50.596820] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:07:12.062 09:41:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.062 09:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:12.062 09:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:12.062 09:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:07:12.062 09:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:07:12.062 09:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:12.062 09:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:12.062 09:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:12.062 09:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:12.062 09:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:12.062 09:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:12.062 09:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.062 09:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.062 09:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.062 09:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.062 09:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.062 09:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:12.062 09:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:12.062 09:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.062 09:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.062 09:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.062 "name": "raid_bdev1", 00:07:12.062 "uuid": "f9385bf5-2022-4de5-906e-4deb55120165", 00:07:12.062 "strip_size_kb": 0, 00:07:12.062 "state": "online", 00:07:12.062 "raid_level": "raid1", 00:07:12.062 "superblock": true, 00:07:12.062 "num_base_bdevs": 2, 00:07:12.062 "num_base_bdevs_discovered": 1, 00:07:12.062 "num_base_bdevs_operational": 1, 00:07:12.062 "base_bdevs_list": [ 00:07:12.062 { 00:07:12.062 "name": null, 00:07:12.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.062 "is_configured": false, 00:07:12.062 "data_offset": 0, 00:07:12.062 "data_size": 63488 00:07:12.062 }, 00:07:12.062 { 00:07:12.062 "name": "BaseBdev2", 00:07:12.062 "uuid": "8e182b95-8d0f-5c4c-8f05-df38a85c685f", 00:07:12.062 "is_configured": true, 00:07:12.062 "data_offset": 2048, 00:07:12.062 "data_size": 63488 00:07:12.062 } 00:07:12.062 ] 00:07:12.062 }' 00:07:12.062 09:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.062 09:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.324 09:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:12.324 09:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.324 09:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.324 [2024-10-30 09:41:50.933786] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:12.324 [2024-10-30 09:41:50.933812] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:12.324 [2024-10-30 09:41:50.936839] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:12.324 { 00:07:12.324 "results": [ 00:07:12.324 { 00:07:12.324 "job": "raid_bdev1", 00:07:12.324 "core_mask": "0x1", 00:07:12.324 "workload": "randrw", 00:07:12.324 "percentage": 50, 00:07:12.324 "status": "finished", 00:07:12.324 "queue_depth": 1, 00:07:12.324 "io_size": 131072, 00:07:12.324 "runtime": 1.253172, 00:07:12.324 "iops": 20916.52223318108, 00:07:12.324 "mibps": 2614.565279147635, 00:07:12.324 "io_failed": 0, 00:07:12.324 "io_timeout": 0, 00:07:12.324 "avg_latency_us": 44.67181443613612, 00:07:12.324 "min_latency_us": 28.356923076923078, 00:07:12.324 "max_latency_us": 1663.6061538461538 00:07:12.324 } 00:07:12.324 ], 00:07:12.324 "core_count": 1 00:07:12.324 } 00:07:12.324 [2024-10-30 09:41:50.936964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.324 [2024-10-30 09:41:50.937038] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:12.324 [2024-10-30 09:41:50.937049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:12.324 09:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.324 09:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62368 00:07:12.324 09:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 62368 ']' 00:07:12.324 09:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 62368 00:07:12.324 09:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:07:12.587 09:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:12.587 09:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62368 00:07:12.587 killing process with pid 62368 00:07:12.587 09:41:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:12.587 09:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:12.587 09:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62368' 00:07:12.587 09:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 62368 00:07:12.587 [2024-10-30 09:41:50.963657] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:12.587 09:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 62368 00:07:12.587 [2024-10-30 09:41:51.047031] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:13.588 09:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:13.588 09:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WkxrBbzeWl 00:07:13.588 09:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:13.588 09:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:13.588 09:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:13.588 09:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:13.588 09:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:13.588 09:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:13.588 00:07:13.588 real 0m3.531s 00:07:13.588 user 0m4.230s 00:07:13.588 sys 0m0.386s 00:07:13.588 09:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:13.588 ************************************ 00:07:13.588 END TEST raid_write_error_test 00:07:13.588 ************************************ 00:07:13.588 09:41:51 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:13.588 09:41:51 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:13.588 09:41:51 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:13.588 09:41:51 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:07:13.588 09:41:51 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:13.588 09:41:51 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:13.588 09:41:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:13.588 ************************************ 00:07:13.588 START TEST raid_state_function_test 00:07:13.588 ************************************ 00:07:13.588 09:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 false 00:07:13.588 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:13.588 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62495 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62495' 00:07:13.589 Process raid pid: 62495 00:07:13.589 09:41:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62495 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 62495 ']' 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:13.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:13.589 09:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.589 [2024-10-30 09:41:51.931661] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:07:13.589 [2024-10-30 09:41:51.931778] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.589 [2024-10-30 09:41:52.093108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.589 [2024-10-30 09:41:52.193186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.850 [2024-10-30 09:41:52.330986] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.850 [2024-10-30 09:41:52.331028] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.425 09:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:14.425 09:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:07:14.425 09:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:14.425 09:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.425 09:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.425 [2024-10-30 09:41:52.772125] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:14.425 [2024-10-30 09:41:52.772171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:14.425 [2024-10-30 09:41:52.772181] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:14.425 [2024-10-30 09:41:52.772190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:14.425 [2024-10-30 09:41:52.772197] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:07:14.425 [2024-10-30 09:41:52.772206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:14.425 09:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.425 09:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:14.425 09:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:14.425 09:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:14.425 09:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:14.425 09:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.425 09:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:14.425 09:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.425 09:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.425 09:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.425 09:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.425 09:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.425 09:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.425 09:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.425 09:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.425 09:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.425 09:41:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.425 "name": "Existed_Raid", 00:07:14.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.425 "strip_size_kb": 64, 00:07:14.425 "state": "configuring", 00:07:14.425 "raid_level": "raid0", 00:07:14.425 "superblock": false, 00:07:14.425 "num_base_bdevs": 3, 00:07:14.425 "num_base_bdevs_discovered": 0, 00:07:14.425 "num_base_bdevs_operational": 3, 00:07:14.425 "base_bdevs_list": [ 00:07:14.425 { 00:07:14.425 "name": "BaseBdev1", 00:07:14.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.425 "is_configured": false, 00:07:14.425 "data_offset": 0, 00:07:14.425 "data_size": 0 00:07:14.425 }, 00:07:14.425 { 00:07:14.425 "name": "BaseBdev2", 00:07:14.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.425 "is_configured": false, 00:07:14.425 "data_offset": 0, 00:07:14.425 "data_size": 0 00:07:14.425 }, 00:07:14.425 { 00:07:14.425 "name": "BaseBdev3", 00:07:14.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.425 "is_configured": false, 00:07:14.425 "data_offset": 0, 00:07:14.425 "data_size": 0 00:07:14.425 } 00:07:14.425 ] 00:07:14.425 }' 00:07:14.425 09:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.425 09:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.688 [2024-10-30 09:41:53.108162] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:14.688 [2024-10-30 09:41:53.108198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.688 [2024-10-30 09:41:53.116160] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:14.688 [2024-10-30 09:41:53.116199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:14.688 [2024-10-30 09:41:53.116208] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:14.688 [2024-10-30 09:41:53.116216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:14.688 [2024-10-30 09:41:53.116222] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:14.688 [2024-10-30 09:41:53.116231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.688 [2024-10-30 09:41:53.148667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:14.688 BaseBdev1 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.688 [ 00:07:14.688 { 00:07:14.688 "name": "BaseBdev1", 00:07:14.688 "aliases": [ 00:07:14.688 "62a20790-5662-4db8-944d-44a26d522f99" 00:07:14.688 ], 00:07:14.688 "product_name": "Malloc disk", 00:07:14.688 "block_size": 512, 00:07:14.688 "num_blocks": 65536, 00:07:14.688 "uuid": "62a20790-5662-4db8-944d-44a26d522f99", 00:07:14.688 "assigned_rate_limits": { 00:07:14.688 "rw_ios_per_sec": 0, 00:07:14.688 "rw_mbytes_per_sec": 0, 00:07:14.688 "r_mbytes_per_sec": 0, 00:07:14.688 "w_mbytes_per_sec": 0 00:07:14.688 }, 
00:07:14.688 "claimed": true, 00:07:14.688 "claim_type": "exclusive_write", 00:07:14.688 "zoned": false, 00:07:14.688 "supported_io_types": { 00:07:14.688 "read": true, 00:07:14.688 "write": true, 00:07:14.688 "unmap": true, 00:07:14.688 "flush": true, 00:07:14.688 "reset": true, 00:07:14.688 "nvme_admin": false, 00:07:14.688 "nvme_io": false, 00:07:14.688 "nvme_io_md": false, 00:07:14.688 "write_zeroes": true, 00:07:14.688 "zcopy": true, 00:07:14.688 "get_zone_info": false, 00:07:14.688 "zone_management": false, 00:07:14.688 "zone_append": false, 00:07:14.688 "compare": false, 00:07:14.688 "compare_and_write": false, 00:07:14.688 "abort": true, 00:07:14.688 "seek_hole": false, 00:07:14.688 "seek_data": false, 00:07:14.688 "copy": true, 00:07:14.688 "nvme_iov_md": false 00:07:14.688 }, 00:07:14.688 "memory_domains": [ 00:07:14.688 { 00:07:14.688 "dma_device_id": "system", 00:07:14.688 "dma_device_type": 1 00:07:14.688 }, 00:07:14.688 { 00:07:14.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.688 "dma_device_type": 2 00:07:14.688 } 00:07:14.688 ], 00:07:14.688 "driver_specific": {} 00:07:14.688 } 00:07:14.688 ] 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.688 09:41:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.688 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.689 "name": "Existed_Raid", 00:07:14.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.689 "strip_size_kb": 64, 00:07:14.689 "state": "configuring", 00:07:14.689 "raid_level": "raid0", 00:07:14.689 "superblock": false, 00:07:14.689 "num_base_bdevs": 3, 00:07:14.689 "num_base_bdevs_discovered": 1, 00:07:14.689 "num_base_bdevs_operational": 3, 00:07:14.689 "base_bdevs_list": [ 00:07:14.689 { 00:07:14.689 "name": "BaseBdev1", 00:07:14.689 "uuid": "62a20790-5662-4db8-944d-44a26d522f99", 00:07:14.689 "is_configured": true, 00:07:14.689 "data_offset": 0, 00:07:14.689 "data_size": 65536 00:07:14.689 }, 00:07:14.689 { 00:07:14.689 "name": "BaseBdev2", 00:07:14.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.689 "is_configured": false, 00:07:14.689 
"data_offset": 0, 00:07:14.689 "data_size": 0 00:07:14.689 }, 00:07:14.689 { 00:07:14.689 "name": "BaseBdev3", 00:07:14.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.689 "is_configured": false, 00:07:14.689 "data_offset": 0, 00:07:14.689 "data_size": 0 00:07:14.689 } 00:07:14.689 ] 00:07:14.689 }' 00:07:14.689 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.689 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.951 [2024-10-30 09:41:53.504791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:14.951 [2024-10-30 09:41:53.504835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.951 [2024-10-30 09:41:53.512842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:14.951 [2024-10-30 09:41:53.514682] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:14.951 [2024-10-30 09:41:53.514722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:07:14.951 [2024-10-30 09:41:53.514731] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:14.951 [2024-10-30 09:41:53.514740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.951 "name": "Existed_Raid", 00:07:14.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.951 "strip_size_kb": 64, 00:07:14.951 "state": "configuring", 00:07:14.951 "raid_level": "raid0", 00:07:14.951 "superblock": false, 00:07:14.951 "num_base_bdevs": 3, 00:07:14.951 "num_base_bdevs_discovered": 1, 00:07:14.951 "num_base_bdevs_operational": 3, 00:07:14.951 "base_bdevs_list": [ 00:07:14.951 { 00:07:14.951 "name": "BaseBdev1", 00:07:14.951 "uuid": "62a20790-5662-4db8-944d-44a26d522f99", 00:07:14.951 "is_configured": true, 00:07:14.951 "data_offset": 0, 00:07:14.951 "data_size": 65536 00:07:14.951 }, 00:07:14.951 { 00:07:14.951 "name": "BaseBdev2", 00:07:14.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.951 "is_configured": false, 00:07:14.951 "data_offset": 0, 00:07:14.951 "data_size": 0 00:07:14.951 }, 00:07:14.951 { 00:07:14.951 "name": "BaseBdev3", 00:07:14.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.951 "is_configured": false, 00:07:14.951 "data_offset": 0, 00:07:14.951 "data_size": 0 00:07:14.951 } 00:07:14.951 ] 00:07:14.951 }' 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.951 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.213 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:15.213 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:15.213 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.475 [2024-10-30 09:41:53.855558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:15.475 BaseBdev2 00:07:15.475 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.475 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:15.475 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:15.475 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:15.475 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:15.475 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:15.475 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:15.475 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:15.475 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.475 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.475 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.475 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:15.475 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.475 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.475 [ 00:07:15.475 { 00:07:15.475 "name": "BaseBdev2", 00:07:15.475 "aliases": [ 00:07:15.475 "a05ef386-936e-46b9-94f4-bb87fce198d9" 00:07:15.475 ], 00:07:15.475 
"product_name": "Malloc disk", 00:07:15.475 "block_size": 512, 00:07:15.475 "num_blocks": 65536, 00:07:15.475 "uuid": "a05ef386-936e-46b9-94f4-bb87fce198d9", 00:07:15.475 "assigned_rate_limits": { 00:07:15.475 "rw_ios_per_sec": 0, 00:07:15.475 "rw_mbytes_per_sec": 0, 00:07:15.475 "r_mbytes_per_sec": 0, 00:07:15.475 "w_mbytes_per_sec": 0 00:07:15.475 }, 00:07:15.475 "claimed": true, 00:07:15.475 "claim_type": "exclusive_write", 00:07:15.475 "zoned": false, 00:07:15.475 "supported_io_types": { 00:07:15.475 "read": true, 00:07:15.475 "write": true, 00:07:15.475 "unmap": true, 00:07:15.475 "flush": true, 00:07:15.475 "reset": true, 00:07:15.475 "nvme_admin": false, 00:07:15.475 "nvme_io": false, 00:07:15.475 "nvme_io_md": false, 00:07:15.475 "write_zeroes": true, 00:07:15.475 "zcopy": true, 00:07:15.475 "get_zone_info": false, 00:07:15.475 "zone_management": false, 00:07:15.475 "zone_append": false, 00:07:15.475 "compare": false, 00:07:15.475 "compare_and_write": false, 00:07:15.475 "abort": true, 00:07:15.475 "seek_hole": false, 00:07:15.475 "seek_data": false, 00:07:15.475 "copy": true, 00:07:15.475 "nvme_iov_md": false 00:07:15.475 }, 00:07:15.475 "memory_domains": [ 00:07:15.475 { 00:07:15.475 "dma_device_id": "system", 00:07:15.475 "dma_device_type": 1 00:07:15.475 }, 00:07:15.475 { 00:07:15.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.475 "dma_device_type": 2 00:07:15.475 } 00:07:15.475 ], 00:07:15.475 "driver_specific": {} 00:07:15.475 } 00:07:15.475 ] 00:07:15.475 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.475 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:15.475 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:15.475 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:15.475 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:15.475 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.475 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:15.475 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.475 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.475 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:15.475 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.476 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.476 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.476 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.476 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.476 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.476 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.476 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.476 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.476 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.476 "name": "Existed_Raid", 00:07:15.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.476 "strip_size_kb": 64, 00:07:15.476 "state": "configuring", 00:07:15.476 "raid_level": "raid0", 00:07:15.476 "superblock": false, 00:07:15.476 
"num_base_bdevs": 3, 00:07:15.476 "num_base_bdevs_discovered": 2, 00:07:15.476 "num_base_bdevs_operational": 3, 00:07:15.476 "base_bdevs_list": [ 00:07:15.476 { 00:07:15.476 "name": "BaseBdev1", 00:07:15.476 "uuid": "62a20790-5662-4db8-944d-44a26d522f99", 00:07:15.476 "is_configured": true, 00:07:15.476 "data_offset": 0, 00:07:15.476 "data_size": 65536 00:07:15.476 }, 00:07:15.476 { 00:07:15.476 "name": "BaseBdev2", 00:07:15.476 "uuid": "a05ef386-936e-46b9-94f4-bb87fce198d9", 00:07:15.476 "is_configured": true, 00:07:15.476 "data_offset": 0, 00:07:15.476 "data_size": 65536 00:07:15.476 }, 00:07:15.476 { 00:07:15.476 "name": "BaseBdev3", 00:07:15.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.476 "is_configured": false, 00:07:15.476 "data_offset": 0, 00:07:15.476 "data_size": 0 00:07:15.476 } 00:07:15.476 ] 00:07:15.476 }' 00:07:15.476 09:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.476 09:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.737 [2024-10-30 09:41:54.242909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:15.737 [2024-10-30 09:41:54.242955] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:15.737 [2024-10-30 09:41:54.242968] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:15.737 [2024-10-30 09:41:54.243249] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:15.737 [2024-10-30 09:41:54.243415] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:07:15.737 [2024-10-30 09:41:54.243426] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:15.737 [2024-10-30 09:41:54.243683] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.737 BaseBdev3 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.737 [ 00:07:15.737 { 00:07:15.737 "name": "BaseBdev3", 00:07:15.737 "aliases": [ 00:07:15.737 
"4feb386d-583c-4516-8e3e-e8d0b8b476ba" 00:07:15.737 ], 00:07:15.737 "product_name": "Malloc disk", 00:07:15.737 "block_size": 512, 00:07:15.737 "num_blocks": 65536, 00:07:15.737 "uuid": "4feb386d-583c-4516-8e3e-e8d0b8b476ba", 00:07:15.737 "assigned_rate_limits": { 00:07:15.737 "rw_ios_per_sec": 0, 00:07:15.737 "rw_mbytes_per_sec": 0, 00:07:15.737 "r_mbytes_per_sec": 0, 00:07:15.737 "w_mbytes_per_sec": 0 00:07:15.737 }, 00:07:15.737 "claimed": true, 00:07:15.737 "claim_type": "exclusive_write", 00:07:15.737 "zoned": false, 00:07:15.737 "supported_io_types": { 00:07:15.737 "read": true, 00:07:15.737 "write": true, 00:07:15.737 "unmap": true, 00:07:15.737 "flush": true, 00:07:15.737 "reset": true, 00:07:15.737 "nvme_admin": false, 00:07:15.737 "nvme_io": false, 00:07:15.737 "nvme_io_md": false, 00:07:15.737 "write_zeroes": true, 00:07:15.737 "zcopy": true, 00:07:15.737 "get_zone_info": false, 00:07:15.737 "zone_management": false, 00:07:15.737 "zone_append": false, 00:07:15.737 "compare": false, 00:07:15.737 "compare_and_write": false, 00:07:15.737 "abort": true, 00:07:15.737 "seek_hole": false, 00:07:15.737 "seek_data": false, 00:07:15.737 "copy": true, 00:07:15.737 "nvme_iov_md": false 00:07:15.737 }, 00:07:15.737 "memory_domains": [ 00:07:15.737 { 00:07:15.737 "dma_device_id": "system", 00:07:15.737 "dma_device_type": 1 00:07:15.737 }, 00:07:15.737 { 00:07:15.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.737 "dma_device_type": 2 00:07:15.737 } 00:07:15.737 ], 00:07:15.737 "driver_specific": {} 00:07:15.737 } 00:07:15.737 ] 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:15.737 
09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.737 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.738 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.738 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.738 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.738 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.738 "name": "Existed_Raid", 00:07:15.738 "uuid": "dc42a2c9-ebda-4039-b6e0-e101a9e1370d", 00:07:15.738 "strip_size_kb": 64, 00:07:15.738 "state": "online", 00:07:15.738 
"raid_level": "raid0", 00:07:15.738 "superblock": false, 00:07:15.738 "num_base_bdevs": 3, 00:07:15.738 "num_base_bdevs_discovered": 3, 00:07:15.738 "num_base_bdevs_operational": 3, 00:07:15.738 "base_bdevs_list": [ 00:07:15.738 { 00:07:15.738 "name": "BaseBdev1", 00:07:15.738 "uuid": "62a20790-5662-4db8-944d-44a26d522f99", 00:07:15.738 "is_configured": true, 00:07:15.738 "data_offset": 0, 00:07:15.738 "data_size": 65536 00:07:15.738 }, 00:07:15.738 { 00:07:15.738 "name": "BaseBdev2", 00:07:15.738 "uuid": "a05ef386-936e-46b9-94f4-bb87fce198d9", 00:07:15.738 "is_configured": true, 00:07:15.738 "data_offset": 0, 00:07:15.738 "data_size": 65536 00:07:15.738 }, 00:07:15.738 { 00:07:15.738 "name": "BaseBdev3", 00:07:15.738 "uuid": "4feb386d-583c-4516-8e3e-e8d0b8b476ba", 00:07:15.738 "is_configured": true, 00:07:15.738 "data_offset": 0, 00:07:15.738 "data_size": 65536 00:07:15.738 } 00:07:15.738 ] 00:07:15.738 }' 00:07:15.738 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.738 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.000 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:16.000 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:16.000 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:16.000 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:16.000 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:16.000 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:16.000 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:16.000 09:41:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:16.000 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.000 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.001 [2024-10-30 09:41:54.607395] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:16.001 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.263 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:16.263 "name": "Existed_Raid", 00:07:16.263 "aliases": [ 00:07:16.263 "dc42a2c9-ebda-4039-b6e0-e101a9e1370d" 00:07:16.263 ], 00:07:16.263 "product_name": "Raid Volume", 00:07:16.263 "block_size": 512, 00:07:16.263 "num_blocks": 196608, 00:07:16.263 "uuid": "dc42a2c9-ebda-4039-b6e0-e101a9e1370d", 00:07:16.263 "assigned_rate_limits": { 00:07:16.263 "rw_ios_per_sec": 0, 00:07:16.263 "rw_mbytes_per_sec": 0, 00:07:16.263 "r_mbytes_per_sec": 0, 00:07:16.263 "w_mbytes_per_sec": 0 00:07:16.263 }, 00:07:16.263 "claimed": false, 00:07:16.263 "zoned": false, 00:07:16.263 "supported_io_types": { 00:07:16.263 "read": true, 00:07:16.263 "write": true, 00:07:16.263 "unmap": true, 00:07:16.263 "flush": true, 00:07:16.263 "reset": true, 00:07:16.263 "nvme_admin": false, 00:07:16.263 "nvme_io": false, 00:07:16.263 "nvme_io_md": false, 00:07:16.263 "write_zeroes": true, 00:07:16.263 "zcopy": false, 00:07:16.263 "get_zone_info": false, 00:07:16.263 "zone_management": false, 00:07:16.263 "zone_append": false, 00:07:16.263 "compare": false, 00:07:16.263 "compare_and_write": false, 00:07:16.263 "abort": false, 00:07:16.263 "seek_hole": false, 00:07:16.263 "seek_data": false, 00:07:16.263 "copy": false, 00:07:16.263 "nvme_iov_md": false 00:07:16.263 }, 00:07:16.263 "memory_domains": [ 00:07:16.263 { 00:07:16.263 "dma_device_id": "system", 00:07:16.263 "dma_device_type": 1 00:07:16.263 }, 00:07:16.263 { 
00:07:16.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.263 "dma_device_type": 2 00:07:16.263 }, 00:07:16.263 { 00:07:16.263 "dma_device_id": "system", 00:07:16.263 "dma_device_type": 1 00:07:16.263 }, 00:07:16.263 { 00:07:16.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.263 "dma_device_type": 2 00:07:16.263 }, 00:07:16.263 { 00:07:16.263 "dma_device_id": "system", 00:07:16.263 "dma_device_type": 1 00:07:16.263 }, 00:07:16.263 { 00:07:16.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.263 "dma_device_type": 2 00:07:16.263 } 00:07:16.263 ], 00:07:16.263 "driver_specific": { 00:07:16.263 "raid": { 00:07:16.263 "uuid": "dc42a2c9-ebda-4039-b6e0-e101a9e1370d", 00:07:16.263 "strip_size_kb": 64, 00:07:16.263 "state": "online", 00:07:16.263 "raid_level": "raid0", 00:07:16.263 "superblock": false, 00:07:16.263 "num_base_bdevs": 3, 00:07:16.263 "num_base_bdevs_discovered": 3, 00:07:16.263 "num_base_bdevs_operational": 3, 00:07:16.263 "base_bdevs_list": [ 00:07:16.263 { 00:07:16.263 "name": "BaseBdev1", 00:07:16.263 "uuid": "62a20790-5662-4db8-944d-44a26d522f99", 00:07:16.263 "is_configured": true, 00:07:16.263 "data_offset": 0, 00:07:16.263 "data_size": 65536 00:07:16.263 }, 00:07:16.263 { 00:07:16.263 "name": "BaseBdev2", 00:07:16.263 "uuid": "a05ef386-936e-46b9-94f4-bb87fce198d9", 00:07:16.263 "is_configured": true, 00:07:16.263 "data_offset": 0, 00:07:16.263 "data_size": 65536 00:07:16.263 }, 00:07:16.263 { 00:07:16.263 "name": "BaseBdev3", 00:07:16.263 "uuid": "4feb386d-583c-4516-8e3e-e8d0b8b476ba", 00:07:16.263 "is_configured": true, 00:07:16.263 "data_offset": 0, 00:07:16.263 "data_size": 65536 00:07:16.263 } 00:07:16.263 ] 00:07:16.263 } 00:07:16.263 } 00:07:16.263 }' 00:07:16.263 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:16.263 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='BaseBdev1 00:07:16.263 BaseBdev2 00:07:16.263 BaseBdev3' 00:07:16.263 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.263 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:16.263 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:16.263 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:16.263 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.263 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.264 [2024-10-30 09:41:54.807135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:16.264 [2024-10-30 09:41:54.807160] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:16.264 [2024-10-30 09:41:54.807211] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.264 09:41:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:16.264 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.525 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.525 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.525 "name": "Existed_Raid", 00:07:16.525 "uuid": "dc42a2c9-ebda-4039-b6e0-e101a9e1370d", 00:07:16.525 "strip_size_kb": 64, 00:07:16.525 "state": "offline", 00:07:16.525 "raid_level": "raid0", 00:07:16.525 "superblock": false, 00:07:16.525 "num_base_bdevs": 3, 00:07:16.525 "num_base_bdevs_discovered": 2, 00:07:16.525 "num_base_bdevs_operational": 2, 00:07:16.525 "base_bdevs_list": [ 00:07:16.525 { 00:07:16.525 "name": null, 00:07:16.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.525 "is_configured": false, 00:07:16.525 "data_offset": 0, 00:07:16.525 "data_size": 65536 00:07:16.525 }, 00:07:16.525 { 00:07:16.525 "name": "BaseBdev2", 00:07:16.525 "uuid": "a05ef386-936e-46b9-94f4-bb87fce198d9", 00:07:16.525 "is_configured": true, 00:07:16.525 "data_offset": 0, 00:07:16.525 "data_size": 65536 00:07:16.525 }, 00:07:16.525 { 00:07:16.525 "name": "BaseBdev3", 00:07:16.525 "uuid": "4feb386d-583c-4516-8e3e-e8d0b8b476ba", 00:07:16.525 "is_configured": true, 00:07:16.525 "data_offset": 0, 00:07:16.525 "data_size": 65536 00:07:16.525 } 00:07:16.525 ] 00:07:16.525 }' 00:07:16.525 09:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.525 09:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.787 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:16.787 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:16.787 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.787 09:41:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.787 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.787 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:16.787 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.787 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:16.787 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:16.787 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:16.787 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.787 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.787 [2024-10-30 09:41:55.206458] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:16.787 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.787 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:16.787 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:16.787 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:16.787 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.787 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.787 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.788 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.788 09:41:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:16.788 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:16.788 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:16.788 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.788 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.788 [2024-10-30 09:41:55.305713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:16.788 [2024-10-30 09:41:55.305758] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:16.788 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.788 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:16.788 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:16.788 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:16.788 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.788 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.788 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.788 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.788 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:16.788 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:16.788 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:16.788 
09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:16.788 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:16.788 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:16.788 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.788 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.050 BaseBdev2 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.050 [ 00:07:17.050 { 00:07:17.050 "name": "BaseBdev2", 00:07:17.050 "aliases": [ 00:07:17.050 "38965e36-c64a-48e1-84f9-eef41622a1b3" 00:07:17.050 ], 00:07:17.050 "product_name": "Malloc disk", 00:07:17.050 "block_size": 512, 00:07:17.050 "num_blocks": 65536, 00:07:17.050 "uuid": "38965e36-c64a-48e1-84f9-eef41622a1b3", 00:07:17.050 "assigned_rate_limits": { 00:07:17.050 "rw_ios_per_sec": 0, 00:07:17.050 "rw_mbytes_per_sec": 0, 00:07:17.050 "r_mbytes_per_sec": 0, 00:07:17.050 "w_mbytes_per_sec": 0 00:07:17.050 }, 00:07:17.050 "claimed": false, 00:07:17.050 "zoned": false, 00:07:17.050 "supported_io_types": { 00:07:17.050 "read": true, 00:07:17.050 "write": true, 00:07:17.050 "unmap": true, 00:07:17.050 "flush": true, 00:07:17.050 "reset": true, 00:07:17.050 "nvme_admin": false, 00:07:17.050 "nvme_io": false, 00:07:17.050 "nvme_io_md": false, 00:07:17.050 "write_zeroes": true, 00:07:17.050 "zcopy": true, 00:07:17.050 "get_zone_info": false, 00:07:17.050 "zone_management": false, 00:07:17.050 "zone_append": false, 00:07:17.050 "compare": false, 00:07:17.050 "compare_and_write": false, 00:07:17.050 "abort": true, 00:07:17.050 "seek_hole": false, 00:07:17.050 "seek_data": false, 00:07:17.050 "copy": true, 00:07:17.050 "nvme_iov_md": false 00:07:17.050 }, 00:07:17.050 "memory_domains": [ 00:07:17.050 { 00:07:17.050 "dma_device_id": "system", 00:07:17.050 "dma_device_type": 1 00:07:17.050 }, 00:07:17.050 { 00:07:17.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.050 "dma_device_type": 2 00:07:17.050 } 00:07:17.050 ], 00:07:17.050 "driver_specific": {} 00:07:17.050 } 00:07:17.050 ] 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:17.050 
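The entries above are the test's `waitforbdev BaseBdev2` step: after `bdev_malloc_create 32 512 -b BaseBdev2`, the helper runs `bdev_wait_for_examine` and then `bdev_get_bdevs -b BaseBdev2 -t 2000`, treating a zero return code (the `[[ 0 == 0 ]]` checks) as the bdev being ready. A minimal Python sketch of that readiness check, run against a record shaped like the JSON the RPC printed in the log (fields trimmed; `SAMPLE` is a stand-in for the real RPC output, not an SPDK API):

```python
import json

# Shaped like `rpc.py bdev_get_bdevs -b BaseBdev2` output in the log
# above, trimmed to the fields the check needs (illustrative sample).
SAMPLE = json.loads("""
[
  {
    "name": "BaseBdev2",
    "product_name": "Malloc disk",
    "block_size": 512,
    "num_blocks": 65536,
    "claimed": false
  }
]
""")

def bdev_present(bdevs, name):
    """Mirror of waitforbdev's success condition: bdev_get_bdevs
    returned a record whose name matches the requested bdev."""
    return any(b["name"] == name for b in bdevs)

print(bdev_present(SAMPLE, "BaseBdev2"))  # True
```

The real helper polls with a 2000 ms timeout via the `-t` flag; the sketch only shows the membership test it reduces to once the RPC answers.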
09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.050 BaseBdev3 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.050 [ 00:07:17.050 { 00:07:17.050 "name": "BaseBdev3", 00:07:17.050 "aliases": [ 00:07:17.050 "5cd79faf-c2ee-4284-bd33-cac928e36fa4" 00:07:17.050 ], 00:07:17.050 "product_name": "Malloc disk", 00:07:17.050 "block_size": 512, 00:07:17.050 "num_blocks": 65536, 00:07:17.050 "uuid": "5cd79faf-c2ee-4284-bd33-cac928e36fa4", 00:07:17.050 "assigned_rate_limits": { 00:07:17.050 "rw_ios_per_sec": 0, 00:07:17.050 "rw_mbytes_per_sec": 0, 00:07:17.050 "r_mbytes_per_sec": 0, 00:07:17.050 "w_mbytes_per_sec": 0 00:07:17.050 }, 00:07:17.050 "claimed": false, 00:07:17.050 "zoned": false, 00:07:17.050 "supported_io_types": { 00:07:17.050 "read": true, 00:07:17.050 "write": true, 00:07:17.050 "unmap": true, 00:07:17.050 "flush": true, 00:07:17.050 "reset": true, 00:07:17.050 "nvme_admin": false, 00:07:17.050 "nvme_io": false, 00:07:17.050 "nvme_io_md": false, 00:07:17.050 "write_zeroes": true, 00:07:17.050 "zcopy": true, 00:07:17.050 "get_zone_info": false, 00:07:17.050 "zone_management": false, 00:07:17.050 "zone_append": false, 00:07:17.050 "compare": false, 00:07:17.050 "compare_and_write": false, 00:07:17.050 "abort": true, 00:07:17.050 "seek_hole": false, 00:07:17.050 "seek_data": false, 00:07:17.050 "copy": true, 00:07:17.050 "nvme_iov_md": false 00:07:17.050 }, 00:07:17.050 "memory_domains": [ 00:07:17.050 { 00:07:17.050 "dma_device_id": "system", 00:07:17.050 "dma_device_type": 1 00:07:17.050 }, 00:07:17.050 { 00:07:17.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.050 "dma_device_type": 2 00:07:17.050 } 00:07:17.050 ], 00:07:17.050 "driver_specific": {} 00:07:17.050 } 00:07:17.050 ] 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:17.050 
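With all three malloc bdevs in place, the test assembles the array via `bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid`, then `verify_raid_bdev_state` extracts the array from `bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "Existed_Raid")'` and compares state, RAID level, strip size, and base-bdev counts. A hedged Python equivalent of that jq selection and comparison, fed a record shaped like the `raid_bdev_info` the log captures (trimmed; the data and function name here are illustrative, not SPDK code):

```python
import json

# Shaped like the raid_bdev_info captured from `bdev_raid_get_bdevs all`
# in the log, trimmed to the fields verify_raid_bdev_state compares.
RAID_BDEVS = json.loads("""
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "raid0",
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 3
  }
]
""")

def verify_raid_bdev_state(bdevs, name, state, level, strip_kb, operational):
    # Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
    info = next((b for b in bdevs if b["name"] == name), None)
    if info is None:
        return False
    return (info["state"] == state
            and info["raid_level"] == level
            and info["strip_size_kb"] == strip_kb
            and info["num_base_bdevs_operational"] == operational)

print(verify_raid_bdev_state(RAID_BDEVS, "Existed_Raid",
                             "configuring", "raid0", 64, 3))  # True
```

Note the expected state is `configuring` rather than `online`: BaseBdev1 is deliberately missing at this point, so only two of the three base bdevs are discovered and the array cannot come online until the remove/re-add steps later in the log complete it.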
09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.050 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.050 [2024-10-30 09:41:55.512834] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:17.050 [2024-10-30 09:41:55.512980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:17.050 [2024-10-30 09:41:55.513012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:17.050 [2024-10-30 09:41:55.514854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:17.051 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.051 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:17.051 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.051 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:17.051 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:17.051 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.051 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:17.051 09:41:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.051 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.051 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.051 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.051 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.051 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.051 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.051 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.051 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.051 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.051 "name": "Existed_Raid", 00:07:17.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.051 "strip_size_kb": 64, 00:07:17.051 "state": "configuring", 00:07:17.051 "raid_level": "raid0", 00:07:17.051 "superblock": false, 00:07:17.051 "num_base_bdevs": 3, 00:07:17.051 "num_base_bdevs_discovered": 2, 00:07:17.051 "num_base_bdevs_operational": 3, 00:07:17.051 "base_bdevs_list": [ 00:07:17.051 { 00:07:17.051 "name": "BaseBdev1", 00:07:17.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.051 "is_configured": false, 00:07:17.051 "data_offset": 0, 00:07:17.051 "data_size": 0 00:07:17.051 }, 00:07:17.051 { 00:07:17.051 "name": "BaseBdev2", 00:07:17.051 "uuid": "38965e36-c64a-48e1-84f9-eef41622a1b3", 00:07:17.051 "is_configured": true, 00:07:17.051 "data_offset": 0, 00:07:17.051 "data_size": 65536 00:07:17.051 }, 00:07:17.051 { 00:07:17.051 "name": "BaseBdev3", 00:07:17.051 "uuid": 
"5cd79faf-c2ee-4284-bd33-cac928e36fa4", 00:07:17.051 "is_configured": true, 00:07:17.051 "data_offset": 0, 00:07:17.051 "data_size": 65536 00:07:17.051 } 00:07:17.051 ] 00:07:17.051 }' 00:07:17.051 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.051 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.313 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:17.313 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.313 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.313 [2024-10-30 09:41:55.828892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:17.313 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.313 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:17.313 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.313 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:17.313 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:17.313 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.313 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:17.313 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.313 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.313 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:17.313 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.313 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.313 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.313 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.313 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.313 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.313 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.313 "name": "Existed_Raid", 00:07:17.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.313 "strip_size_kb": 64, 00:07:17.313 "state": "configuring", 00:07:17.313 "raid_level": "raid0", 00:07:17.313 "superblock": false, 00:07:17.313 "num_base_bdevs": 3, 00:07:17.313 "num_base_bdevs_discovered": 1, 00:07:17.313 "num_base_bdevs_operational": 3, 00:07:17.313 "base_bdevs_list": [ 00:07:17.313 { 00:07:17.313 "name": "BaseBdev1", 00:07:17.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.313 "is_configured": false, 00:07:17.313 "data_offset": 0, 00:07:17.313 "data_size": 0 00:07:17.313 }, 00:07:17.313 { 00:07:17.313 "name": null, 00:07:17.313 "uuid": "38965e36-c64a-48e1-84f9-eef41622a1b3", 00:07:17.313 "is_configured": false, 00:07:17.313 "data_offset": 0, 00:07:17.313 "data_size": 65536 00:07:17.313 }, 00:07:17.313 { 00:07:17.313 "name": "BaseBdev3", 00:07:17.313 "uuid": "5cd79faf-c2ee-4284-bd33-cac928e36fa4", 00:07:17.313 "is_configured": true, 00:07:17.313 "data_offset": 0, 00:07:17.313 "data_size": 65536 00:07:17.313 } 00:07:17.313 ] 00:07:17.313 }' 00:07:17.313 09:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:17.313 09:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.574 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.574 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.574 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:17.574 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.574 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.574 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:17.574 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:17.574 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.574 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.836 [2024-10-30 09:41:56.207321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:17.836 BaseBdev1 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.836 [ 00:07:17.836 { 00:07:17.836 "name": "BaseBdev1", 00:07:17.836 "aliases": [ 00:07:17.836 "d844530f-25c0-4d4c-9e9a-9e819b1c5404" 00:07:17.836 ], 00:07:17.836 "product_name": "Malloc disk", 00:07:17.836 "block_size": 512, 00:07:17.836 "num_blocks": 65536, 00:07:17.836 "uuid": "d844530f-25c0-4d4c-9e9a-9e819b1c5404", 00:07:17.836 "assigned_rate_limits": { 00:07:17.836 "rw_ios_per_sec": 0, 00:07:17.836 "rw_mbytes_per_sec": 0, 00:07:17.836 "r_mbytes_per_sec": 0, 00:07:17.836 "w_mbytes_per_sec": 0 00:07:17.836 }, 00:07:17.836 "claimed": true, 00:07:17.836 "claim_type": "exclusive_write", 00:07:17.836 "zoned": false, 00:07:17.836 "supported_io_types": { 00:07:17.836 "read": true, 00:07:17.836 "write": true, 00:07:17.836 "unmap": true, 00:07:17.836 "flush": true, 00:07:17.836 "reset": true, 00:07:17.836 "nvme_admin": false, 00:07:17.836 "nvme_io": false, 00:07:17.836 "nvme_io_md": false, 00:07:17.836 "write_zeroes": true, 00:07:17.836 "zcopy": true, 00:07:17.836 "get_zone_info": false, 00:07:17.836 "zone_management": false, 00:07:17.836 "zone_append": false, 00:07:17.836 "compare": false, 00:07:17.836 "compare_and_write": false, 
00:07:17.836 "abort": true, 00:07:17.836 "seek_hole": false, 00:07:17.836 "seek_data": false, 00:07:17.836 "copy": true, 00:07:17.836 "nvme_iov_md": false 00:07:17.836 }, 00:07:17.836 "memory_domains": [ 00:07:17.836 { 00:07:17.836 "dma_device_id": "system", 00:07:17.836 "dma_device_type": 1 00:07:17.836 }, 00:07:17.836 { 00:07:17.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.836 "dma_device_type": 2 00:07:17.836 } 00:07:17.836 ], 00:07:17.836 "driver_specific": {} 00:07:17.836 } 00:07:17.836 ] 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.836 "name": "Existed_Raid", 00:07:17.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.836 "strip_size_kb": 64, 00:07:17.836 "state": "configuring", 00:07:17.836 "raid_level": "raid0", 00:07:17.836 "superblock": false, 00:07:17.836 "num_base_bdevs": 3, 00:07:17.836 "num_base_bdevs_discovered": 2, 00:07:17.836 "num_base_bdevs_operational": 3, 00:07:17.836 "base_bdevs_list": [ 00:07:17.836 { 00:07:17.836 "name": "BaseBdev1", 00:07:17.836 "uuid": "d844530f-25c0-4d4c-9e9a-9e819b1c5404", 00:07:17.836 "is_configured": true, 00:07:17.836 "data_offset": 0, 00:07:17.836 "data_size": 65536 00:07:17.836 }, 00:07:17.836 { 00:07:17.836 "name": null, 00:07:17.836 "uuid": "38965e36-c64a-48e1-84f9-eef41622a1b3", 00:07:17.836 "is_configured": false, 00:07:17.836 "data_offset": 0, 00:07:17.836 "data_size": 65536 00:07:17.836 }, 00:07:17.836 { 00:07:17.836 "name": "BaseBdev3", 00:07:17.836 "uuid": "5cd79faf-c2ee-4284-bd33-cac928e36fa4", 00:07:17.836 "is_configured": true, 00:07:17.836 "data_offset": 0, 00:07:17.836 "data_size": 65536 00:07:17.836 } 00:07:17.836 ] 00:07:17.836 }' 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.836 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.097 09:41:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.097 [2024-10-30 09:41:56.563455] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.097 "name": "Existed_Raid", 00:07:18.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.097 "strip_size_kb": 64, 00:07:18.097 "state": "configuring", 00:07:18.097 "raid_level": "raid0", 00:07:18.097 "superblock": false, 00:07:18.097 "num_base_bdevs": 3, 00:07:18.097 "num_base_bdevs_discovered": 1, 00:07:18.097 "num_base_bdevs_operational": 3, 00:07:18.097 "base_bdevs_list": [ 00:07:18.097 { 00:07:18.097 "name": "BaseBdev1", 00:07:18.097 "uuid": "d844530f-25c0-4d4c-9e9a-9e819b1c5404", 00:07:18.097 "is_configured": true, 00:07:18.097 "data_offset": 0, 00:07:18.097 "data_size": 65536 00:07:18.097 }, 00:07:18.097 { 00:07:18.097 "name": null, 00:07:18.097 "uuid": "38965e36-c64a-48e1-84f9-eef41622a1b3", 00:07:18.097 "is_configured": false, 00:07:18.097 "data_offset": 0, 00:07:18.097 "data_size": 65536 00:07:18.097 }, 00:07:18.097 { 00:07:18.097 "name": null, 00:07:18.097 "uuid": "5cd79faf-c2ee-4284-bd33-cac928e36fa4", 00:07:18.097 "is_configured": false, 00:07:18.097 "data_offset": 0, 00:07:18.097 "data_size": 65536 00:07:18.097 } 
00:07:18.097 ] 00:07:18.097 }' 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.097 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.357 [2024-10-30 09:41:56.907558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 
00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.357 "name": "Existed_Raid", 00:07:18.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.357 "strip_size_kb": 64, 00:07:18.357 "state": "configuring", 00:07:18.357 "raid_level": "raid0", 00:07:18.357 "superblock": false, 00:07:18.357 "num_base_bdevs": 3, 00:07:18.357 "num_base_bdevs_discovered": 2, 00:07:18.357 "num_base_bdevs_operational": 3, 00:07:18.357 "base_bdevs_list": [ 00:07:18.357 { 00:07:18.357 "name": "BaseBdev1", 00:07:18.357 "uuid": "d844530f-25c0-4d4c-9e9a-9e819b1c5404", 00:07:18.357 "is_configured": true, 00:07:18.357 "data_offset": 0, 00:07:18.357 "data_size": 65536 00:07:18.357 }, 00:07:18.357 { 00:07:18.357 "name": 
null, 00:07:18.357 "uuid": "38965e36-c64a-48e1-84f9-eef41622a1b3", 00:07:18.357 "is_configured": false, 00:07:18.357 "data_offset": 0, 00:07:18.357 "data_size": 65536 00:07:18.357 }, 00:07:18.357 { 00:07:18.357 "name": "BaseBdev3", 00:07:18.357 "uuid": "5cd79faf-c2ee-4284-bd33-cac928e36fa4", 00:07:18.357 "is_configured": true, 00:07:18.357 "data_offset": 0, 00:07:18.357 "data_size": 65536 00:07:18.357 } 00:07:18.357 ] 00:07:18.357 }' 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.357 09:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.930 [2024-10-30 09:41:57.275669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.930 "name": "Existed_Raid", 00:07:18.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.930 "strip_size_kb": 64, 00:07:18.930 "state": "configuring", 00:07:18.930 "raid_level": "raid0", 00:07:18.930 "superblock": false, 00:07:18.930 "num_base_bdevs": 3, 00:07:18.930 
"num_base_bdevs_discovered": 1, 00:07:18.930 "num_base_bdevs_operational": 3, 00:07:18.930 "base_bdevs_list": [ 00:07:18.930 { 00:07:18.930 "name": null, 00:07:18.930 "uuid": "d844530f-25c0-4d4c-9e9a-9e819b1c5404", 00:07:18.930 "is_configured": false, 00:07:18.930 "data_offset": 0, 00:07:18.930 "data_size": 65536 00:07:18.930 }, 00:07:18.930 { 00:07:18.930 "name": null, 00:07:18.930 "uuid": "38965e36-c64a-48e1-84f9-eef41622a1b3", 00:07:18.930 "is_configured": false, 00:07:18.930 "data_offset": 0, 00:07:18.930 "data_size": 65536 00:07:18.930 }, 00:07:18.930 { 00:07:18.930 "name": "BaseBdev3", 00:07:18.930 "uuid": "5cd79faf-c2ee-4284-bd33-cac928e36fa4", 00:07:18.930 "is_configured": true, 00:07:18.930 "data_offset": 0, 00:07:18.930 "data_size": 65536 00:07:18.930 } 00:07:18.930 ] 00:07:18.930 }' 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.930 09:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.191 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.191 09:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.191 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:19.191 09:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.191 09:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.191 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:19.191 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:19.191 09:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.192 09:41:57 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:19.192 [2024-10-30 09:41:57.693995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:19.192 09:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.192 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:19.192 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.192 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:19.192 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:19.192 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.192 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:19.192 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.192 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.192 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.192 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.192 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.192 09:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.192 09:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.192 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.192 09:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:19.192 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.192 "name": "Existed_Raid", 00:07:19.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.192 "strip_size_kb": 64, 00:07:19.192 "state": "configuring", 00:07:19.192 "raid_level": "raid0", 00:07:19.192 "superblock": false, 00:07:19.192 "num_base_bdevs": 3, 00:07:19.192 "num_base_bdevs_discovered": 2, 00:07:19.192 "num_base_bdevs_operational": 3, 00:07:19.192 "base_bdevs_list": [ 00:07:19.192 { 00:07:19.192 "name": null, 00:07:19.192 "uuid": "d844530f-25c0-4d4c-9e9a-9e819b1c5404", 00:07:19.192 "is_configured": false, 00:07:19.192 "data_offset": 0, 00:07:19.192 "data_size": 65536 00:07:19.192 }, 00:07:19.192 { 00:07:19.192 "name": "BaseBdev2", 00:07:19.192 "uuid": "38965e36-c64a-48e1-84f9-eef41622a1b3", 00:07:19.192 "is_configured": true, 00:07:19.192 "data_offset": 0, 00:07:19.192 "data_size": 65536 00:07:19.192 }, 00:07:19.192 { 00:07:19.192 "name": "BaseBdev3", 00:07:19.192 "uuid": "5cd79faf-c2ee-4284-bd33-cac928e36fa4", 00:07:19.192 "is_configured": true, 00:07:19.192 "data_offset": 0, 00:07:19.192 "data_size": 65536 00:07:19.192 } 00:07:19.192 ] 00:07:19.192 }' 00:07:19.192 09:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.192 09:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.454 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:19.454 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.454 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.454 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.454 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.454 
09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:19.454 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.454 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.454 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:19.454 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.454 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.718 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d844530f-25c0-4d4c-9e9a-9e819b1c5404 00:07:19.718 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.718 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.718 [2024-10-30 09:41:58.100398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:19.718 [2024-10-30 09:41:58.100432] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:19.718 [2024-10-30 09:41:58.100442] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:19.718 [2024-10-30 09:41:58.100687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:19.718 [2024-10-30 09:41:58.100807] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:19.718 [2024-10-30 09:41:58.100816] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:07:19.718 [2024-10-30 09:41:58.101025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.718 NewBaseBdev 00:07:19.719 09:41:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.719 [ 00:07:19.719 { 00:07:19.719 "name": "NewBaseBdev", 00:07:19.719 "aliases": [ 00:07:19.719 "d844530f-25c0-4d4c-9e9a-9e819b1c5404" 00:07:19.719 ], 00:07:19.719 "product_name": "Malloc disk", 00:07:19.719 "block_size": 512, 00:07:19.719 "num_blocks": 65536, 00:07:19.719 "uuid": "d844530f-25c0-4d4c-9e9a-9e819b1c5404", 00:07:19.719 "assigned_rate_limits": { 00:07:19.719 "rw_ios_per_sec": 0, 00:07:19.719 "rw_mbytes_per_sec": 0, 
00:07:19.719 "r_mbytes_per_sec": 0, 00:07:19.719 "w_mbytes_per_sec": 0 00:07:19.719 }, 00:07:19.719 "claimed": true, 00:07:19.719 "claim_type": "exclusive_write", 00:07:19.719 "zoned": false, 00:07:19.719 "supported_io_types": { 00:07:19.719 "read": true, 00:07:19.719 "write": true, 00:07:19.719 "unmap": true, 00:07:19.719 "flush": true, 00:07:19.719 "reset": true, 00:07:19.719 "nvme_admin": false, 00:07:19.719 "nvme_io": false, 00:07:19.719 "nvme_io_md": false, 00:07:19.719 "write_zeroes": true, 00:07:19.719 "zcopy": true, 00:07:19.719 "get_zone_info": false, 00:07:19.719 "zone_management": false, 00:07:19.719 "zone_append": false, 00:07:19.719 "compare": false, 00:07:19.719 "compare_and_write": false, 00:07:19.719 "abort": true, 00:07:19.719 "seek_hole": false, 00:07:19.719 "seek_data": false, 00:07:19.719 "copy": true, 00:07:19.719 "nvme_iov_md": false 00:07:19.719 }, 00:07:19.719 "memory_domains": [ 00:07:19.719 { 00:07:19.719 "dma_device_id": "system", 00:07:19.719 "dma_device_type": 1 00:07:19.719 }, 00:07:19.719 { 00:07:19.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.719 "dma_device_type": 2 00:07:19.719 } 00:07:19.719 ], 00:07:19.719 "driver_specific": {} 00:07:19.719 } 00:07:19.719 ] 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.719 "name": "Existed_Raid", 00:07:19.719 "uuid": "e0b7d497-94b6-4be3-ad86-77c755d2754b", 00:07:19.719 "strip_size_kb": 64, 00:07:19.719 "state": "online", 00:07:19.719 "raid_level": "raid0", 00:07:19.719 "superblock": false, 00:07:19.719 "num_base_bdevs": 3, 00:07:19.719 "num_base_bdevs_discovered": 3, 00:07:19.719 "num_base_bdevs_operational": 3, 00:07:19.719 "base_bdevs_list": [ 00:07:19.719 { 00:07:19.719 "name": "NewBaseBdev", 00:07:19.719 "uuid": "d844530f-25c0-4d4c-9e9a-9e819b1c5404", 00:07:19.719 "is_configured": true, 00:07:19.719 "data_offset": 0, 00:07:19.719 "data_size": 65536 00:07:19.719 }, 00:07:19.719 { 00:07:19.719 "name": "BaseBdev2", 00:07:19.719 "uuid": 
"38965e36-c64a-48e1-84f9-eef41622a1b3", 00:07:19.719 "is_configured": true, 00:07:19.719 "data_offset": 0, 00:07:19.719 "data_size": 65536 00:07:19.719 }, 00:07:19.719 { 00:07:19.719 "name": "BaseBdev3", 00:07:19.719 "uuid": "5cd79faf-c2ee-4284-bd33-cac928e36fa4", 00:07:19.719 "is_configured": true, 00:07:19.719 "data_offset": 0, 00:07:19.719 "data_size": 65536 00:07:19.719 } 00:07:19.719 ] 00:07:19.719 }' 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.719 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:19.980 [2024-10-30 09:41:58.456866] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.980 09:41:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:19.980 "name": "Existed_Raid", 00:07:19.980 "aliases": [ 00:07:19.980 "e0b7d497-94b6-4be3-ad86-77c755d2754b" 00:07:19.980 ], 00:07:19.980 "product_name": "Raid Volume", 00:07:19.980 "block_size": 512, 00:07:19.980 "num_blocks": 196608, 00:07:19.980 "uuid": "e0b7d497-94b6-4be3-ad86-77c755d2754b", 00:07:19.980 "assigned_rate_limits": { 00:07:19.980 "rw_ios_per_sec": 0, 00:07:19.980 "rw_mbytes_per_sec": 0, 00:07:19.980 "r_mbytes_per_sec": 0, 00:07:19.980 "w_mbytes_per_sec": 0 00:07:19.980 }, 00:07:19.980 "claimed": false, 00:07:19.980 "zoned": false, 00:07:19.980 "supported_io_types": { 00:07:19.980 "read": true, 00:07:19.980 "write": true, 00:07:19.980 "unmap": true, 00:07:19.980 "flush": true, 00:07:19.980 "reset": true, 00:07:19.980 "nvme_admin": false, 00:07:19.980 "nvme_io": false, 00:07:19.980 "nvme_io_md": false, 00:07:19.980 "write_zeroes": true, 00:07:19.980 "zcopy": false, 00:07:19.980 "get_zone_info": false, 00:07:19.980 "zone_management": false, 00:07:19.980 "zone_append": false, 00:07:19.980 "compare": false, 00:07:19.980 "compare_and_write": false, 00:07:19.980 "abort": false, 00:07:19.980 "seek_hole": false, 00:07:19.980 "seek_data": false, 00:07:19.980 "copy": false, 00:07:19.980 "nvme_iov_md": false 00:07:19.980 }, 00:07:19.980 "memory_domains": [ 00:07:19.980 { 00:07:19.980 "dma_device_id": "system", 00:07:19.980 "dma_device_type": 1 00:07:19.980 }, 00:07:19.980 { 00:07:19.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.980 "dma_device_type": 2 00:07:19.980 }, 00:07:19.980 { 00:07:19.980 "dma_device_id": "system", 00:07:19.980 "dma_device_type": 1 00:07:19.980 }, 00:07:19.980 { 00:07:19.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.980 "dma_device_type": 2 00:07:19.980 }, 00:07:19.980 { 00:07:19.980 "dma_device_id": "system", 00:07:19.980 "dma_device_type": 1 00:07:19.980 }, 00:07:19.980 { 00:07:19.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:07:19.980 "dma_device_type": 2 00:07:19.980 } 00:07:19.980 ], 00:07:19.980 "driver_specific": { 00:07:19.980 "raid": { 00:07:19.980 "uuid": "e0b7d497-94b6-4be3-ad86-77c755d2754b", 00:07:19.980 "strip_size_kb": 64, 00:07:19.980 "state": "online", 00:07:19.980 "raid_level": "raid0", 00:07:19.980 "superblock": false, 00:07:19.980 "num_base_bdevs": 3, 00:07:19.980 "num_base_bdevs_discovered": 3, 00:07:19.980 "num_base_bdevs_operational": 3, 00:07:19.980 "base_bdevs_list": [ 00:07:19.980 { 00:07:19.980 "name": "NewBaseBdev", 00:07:19.980 "uuid": "d844530f-25c0-4d4c-9e9a-9e819b1c5404", 00:07:19.980 "is_configured": true, 00:07:19.980 "data_offset": 0, 00:07:19.980 "data_size": 65536 00:07:19.980 }, 00:07:19.980 { 00:07:19.980 "name": "BaseBdev2", 00:07:19.980 "uuid": "38965e36-c64a-48e1-84f9-eef41622a1b3", 00:07:19.980 "is_configured": true, 00:07:19.980 "data_offset": 0, 00:07:19.980 "data_size": 65536 00:07:19.980 }, 00:07:19.980 { 00:07:19.980 "name": "BaseBdev3", 00:07:19.980 "uuid": "5cd79faf-c2ee-4284-bd33-cac928e36fa4", 00:07:19.980 "is_configured": true, 00:07:19.980 "data_offset": 0, 00:07:19.980 "data_size": 65536 00:07:19.980 } 00:07:19.980 ] 00:07:19.980 } 00:07:19.980 } 00:07:19.980 }' 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:19.980 BaseBdev2 00:07:19.980 BaseBdev3' 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.980 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.241 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:20.241 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:20.241 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:20.241 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:07:20.241 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:20.241 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.241 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.241 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.241 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:20.241 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:20.241 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:20.241 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.241 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.241 [2024-10-30 09:41:58.640575] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:20.241 [2024-10-30 09:41:58.640599] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:20.241 [2024-10-30 09:41:58.640662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:20.241 [2024-10-30 09:41:58.640717] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:20.241 [2024-10-30 09:41:58.640729] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:07:20.241 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.241 09:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62495 00:07:20.241 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 62495 
']' 00:07:20.241 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 62495 00:07:20.241 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:07:20.241 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:20.241 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62495 00:07:20.241 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:20.241 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:20.241 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62495' 00:07:20.241 killing process with pid 62495 00:07:20.241 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 62495 00:07:20.241 [2024-10-30 09:41:58.678545] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:20.241 09:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 62495 00:07:20.500 [2024-10-30 09:41:58.866516] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:21.068 00:07:21.068 real 0m7.716s 00:07:21.068 user 0m12.321s 00:07:21.068 sys 0m1.175s 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:21.068 ************************************ 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.068 END TEST raid_state_function_test 00:07:21.068 ************************************ 00:07:21.068 09:41:59 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:07:21.068 
09:41:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:21.068 09:41:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:21.068 09:41:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:21.068 ************************************ 00:07:21.068 START TEST raid_state_function_test_sb 00:07:21.068 ************************************ 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 true 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.068 Process raid pid: 63094 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63094 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63094' 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63094 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 63094 ']' 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:21.068 09:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.326 [2024-10-30 09:41:59.721580] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:07:21.327 [2024-10-30 09:41:59.721855] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.327 [2024-10-30 09:41:59.879605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.586 [2024-10-30 09:41:59.983562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.586 [2024-10-30 09:42:00.122498] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.586 [2024-10-30 09:42:00.122684] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.159 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:22.159 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:07:22.159 09:42:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:22.159 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.159 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.159 [2024-10-30 09:42:00.589650] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:22.159 [2024-10-30 09:42:00.589701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:22.159 [2024-10-30 09:42:00.589712] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:22.159 [2024-10-30 09:42:00.589723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:22.159 [2024-10-30 09:42:00.589730] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:22.159 [2024-10-30 09:42:00.589740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:22.159 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.159 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:22.159 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.159 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:22.159 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:22.159 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.159 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:22.159 
09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.159 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.159 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.159 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.159 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.159 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.159 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.159 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.159 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.159 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.159 "name": "Existed_Raid", 00:07:22.159 "uuid": "86dcd0e0-a1c7-4989-aa57-98f605e0bb7c", 00:07:22.159 "strip_size_kb": 64, 00:07:22.159 "state": "configuring", 00:07:22.159 "raid_level": "raid0", 00:07:22.159 "superblock": true, 00:07:22.159 "num_base_bdevs": 3, 00:07:22.159 "num_base_bdevs_discovered": 0, 00:07:22.159 "num_base_bdevs_operational": 3, 00:07:22.159 "base_bdevs_list": [ 00:07:22.159 { 00:07:22.159 "name": "BaseBdev1", 00:07:22.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.159 "is_configured": false, 00:07:22.159 "data_offset": 0, 00:07:22.159 "data_size": 0 00:07:22.159 }, 00:07:22.159 { 00:07:22.159 "name": "BaseBdev2", 00:07:22.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.159 "is_configured": false, 00:07:22.159 "data_offset": 0, 00:07:22.159 "data_size": 0 00:07:22.159 }, 00:07:22.159 { 00:07:22.159 
"name": "BaseBdev3", 00:07:22.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.159 "is_configured": false, 00:07:22.159 "data_offset": 0, 00:07:22.159 "data_size": 0 00:07:22.159 } 00:07:22.159 ] 00:07:22.159 }' 00:07:22.159 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.159 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.421 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:22.421 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.421 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.421 [2024-10-30 09:42:00.905665] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:22.421 [2024-10-30 09:42:00.905696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:22.421 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.421 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:22.421 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.421 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.421 [2024-10-30 09:42:00.913678] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:22.421 [2024-10-30 09:42:00.913718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:22.421 [2024-10-30 09:42:00.913727] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:22.421 [2024-10-30 
09:42:00.913737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:22.421 [2024-10-30 09:42:00.913743] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:22.421 [2024-10-30 09:42:00.913753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:22.421 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.421 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:22.421 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.421 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.421 [2024-10-30 09:42:00.946079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:22.421 BaseBdev1 00:07:22.421 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.422 [ 00:07:22.422 { 00:07:22.422 "name": "BaseBdev1", 00:07:22.422 "aliases": [ 00:07:22.422 "6a8326fb-7c41-419b-bbe5-13216acc5891" 00:07:22.422 ], 00:07:22.422 "product_name": "Malloc disk", 00:07:22.422 "block_size": 512, 00:07:22.422 "num_blocks": 65536, 00:07:22.422 "uuid": "6a8326fb-7c41-419b-bbe5-13216acc5891", 00:07:22.422 "assigned_rate_limits": { 00:07:22.422 "rw_ios_per_sec": 0, 00:07:22.422 "rw_mbytes_per_sec": 0, 00:07:22.422 "r_mbytes_per_sec": 0, 00:07:22.422 "w_mbytes_per_sec": 0 00:07:22.422 }, 00:07:22.422 "claimed": true, 00:07:22.422 "claim_type": "exclusive_write", 00:07:22.422 "zoned": false, 00:07:22.422 "supported_io_types": { 00:07:22.422 "read": true, 00:07:22.422 "write": true, 00:07:22.422 "unmap": true, 00:07:22.422 "flush": true, 00:07:22.422 "reset": true, 00:07:22.422 "nvme_admin": false, 00:07:22.422 "nvme_io": false, 00:07:22.422 "nvme_io_md": false, 00:07:22.422 "write_zeroes": true, 00:07:22.422 "zcopy": true, 00:07:22.422 "get_zone_info": false, 00:07:22.422 "zone_management": false, 00:07:22.422 "zone_append": false, 00:07:22.422 "compare": false, 00:07:22.422 "compare_and_write": false, 00:07:22.422 "abort": true, 00:07:22.422 "seek_hole": false, 00:07:22.422 "seek_data": false, 00:07:22.422 "copy": true, 00:07:22.422 "nvme_iov_md": false 00:07:22.422 }, 00:07:22.422 "memory_domains": [ 00:07:22.422 { 00:07:22.422 "dma_device_id": 
"system", 00:07:22.422 "dma_device_type": 1 00:07:22.422 }, 00:07:22.422 { 00:07:22.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.422 "dma_device_type": 2 00:07:22.422 } 00:07:22.422 ], 00:07:22.422 "driver_specific": {} 00:07:22.422 } 00:07:22.422 ] 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.422 09:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.422 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.422 "name": "Existed_Raid", 00:07:22.422 "uuid": "4cc2352e-d295-4040-af98-43bdde07b308", 00:07:22.422 "strip_size_kb": 64, 00:07:22.422 "state": "configuring", 00:07:22.422 "raid_level": "raid0", 00:07:22.422 "superblock": true, 00:07:22.422 "num_base_bdevs": 3, 00:07:22.422 "num_base_bdevs_discovered": 1, 00:07:22.422 "num_base_bdevs_operational": 3, 00:07:22.422 "base_bdevs_list": [ 00:07:22.422 { 00:07:22.422 "name": "BaseBdev1", 00:07:22.422 "uuid": "6a8326fb-7c41-419b-bbe5-13216acc5891", 00:07:22.422 "is_configured": true, 00:07:22.422 "data_offset": 2048, 00:07:22.422 "data_size": 63488 00:07:22.422 }, 00:07:22.422 { 00:07:22.422 "name": "BaseBdev2", 00:07:22.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.422 "is_configured": false, 00:07:22.422 "data_offset": 0, 00:07:22.422 "data_size": 0 00:07:22.422 }, 00:07:22.422 { 00:07:22.422 "name": "BaseBdev3", 00:07:22.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.422 "is_configured": false, 00:07:22.422 "data_offset": 0, 00:07:22.422 "data_size": 0 00:07:22.422 } 00:07:22.422 ] 00:07:22.422 }' 00:07:22.422 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.422 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.683 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:22.683 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.683 09:42:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:22.683 [2024-10-30 09:42:01.282190] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:22.683 [2024-10-30 09:42:01.282339] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:22.683 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.683 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:22.683 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.683 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.683 [2024-10-30 09:42:01.290247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:22.683 [2024-10-30 09:42:01.292167] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:22.683 [2024-10-30 09:42:01.292206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:22.683 [2024-10-30 09:42:01.292216] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:22.683 [2024-10-30 09:42:01.292227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:22.683 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.683 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:22.683 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:22.683 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:22.683 09:42:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.683 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:22.683 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:22.683 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.683 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:22.683 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.683 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.683 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.683 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.683 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.683 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.683 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.683 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.945 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.945 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.945 "name": "Existed_Raid", 00:07:22.945 "uuid": "3facbd44-c841-4d28-b21f-8e5b84f9b8a4", 00:07:22.945 "strip_size_kb": 64, 00:07:22.945 "state": "configuring", 00:07:22.945 "raid_level": "raid0", 00:07:22.945 "superblock": true, 00:07:22.945 "num_base_bdevs": 3, 00:07:22.945 
"num_base_bdevs_discovered": 1, 00:07:22.945 "num_base_bdevs_operational": 3, 00:07:22.945 "base_bdevs_list": [ 00:07:22.945 { 00:07:22.945 "name": "BaseBdev1", 00:07:22.945 "uuid": "6a8326fb-7c41-419b-bbe5-13216acc5891", 00:07:22.945 "is_configured": true, 00:07:22.945 "data_offset": 2048, 00:07:22.945 "data_size": 63488 00:07:22.945 }, 00:07:22.945 { 00:07:22.945 "name": "BaseBdev2", 00:07:22.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.945 "is_configured": false, 00:07:22.945 "data_offset": 0, 00:07:22.945 "data_size": 0 00:07:22.945 }, 00:07:22.945 { 00:07:22.945 "name": "BaseBdev3", 00:07:22.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.945 "is_configured": false, 00:07:22.945 "data_offset": 0, 00:07:22.945 "data_size": 0 00:07:22.945 } 00:07:22.945 ] 00:07:22.945 }' 00:07:22.945 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.945 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.206 [2024-10-30 09:42:01.636726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:23.206 BaseBdev2 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.206 [ 00:07:23.206 { 00:07:23.206 "name": "BaseBdev2", 00:07:23.206 "aliases": [ 00:07:23.206 "67da7746-5ca8-4aab-94e8-45760821bb67" 00:07:23.206 ], 00:07:23.206 "product_name": "Malloc disk", 00:07:23.206 "block_size": 512, 00:07:23.206 "num_blocks": 65536, 00:07:23.206 "uuid": "67da7746-5ca8-4aab-94e8-45760821bb67", 00:07:23.206 "assigned_rate_limits": { 00:07:23.206 "rw_ios_per_sec": 0, 00:07:23.206 "rw_mbytes_per_sec": 0, 00:07:23.206 "r_mbytes_per_sec": 0, 00:07:23.206 "w_mbytes_per_sec": 0 00:07:23.206 }, 00:07:23.206 "claimed": true, 00:07:23.206 "claim_type": "exclusive_write", 00:07:23.206 "zoned": false, 00:07:23.206 "supported_io_types": { 00:07:23.206 "read": true, 00:07:23.206 "write": true, 00:07:23.206 "unmap": true, 00:07:23.206 "flush": true, 00:07:23.206 "reset": true, 00:07:23.206 "nvme_admin": false, 
00:07:23.206 "nvme_io": false, 00:07:23.206 "nvme_io_md": false, 00:07:23.206 "write_zeroes": true, 00:07:23.206 "zcopy": true, 00:07:23.206 "get_zone_info": false, 00:07:23.206 "zone_management": false, 00:07:23.206 "zone_append": false, 00:07:23.206 "compare": false, 00:07:23.206 "compare_and_write": false, 00:07:23.206 "abort": true, 00:07:23.206 "seek_hole": false, 00:07:23.206 "seek_data": false, 00:07:23.206 "copy": true, 00:07:23.206 "nvme_iov_md": false 00:07:23.206 }, 00:07:23.206 "memory_domains": [ 00:07:23.206 { 00:07:23.206 "dma_device_id": "system", 00:07:23.206 "dma_device_type": 1 00:07:23.206 }, 00:07:23.206 { 00:07:23.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.206 "dma_device_type": 2 00:07:23.206 } 00:07:23.206 ], 00:07:23.206 "driver_specific": {} 00:07:23.206 } 00:07:23.206 ] 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.206 "name": "Existed_Raid", 00:07:23.206 "uuid": "3facbd44-c841-4d28-b21f-8e5b84f9b8a4", 00:07:23.206 "strip_size_kb": 64, 00:07:23.206 "state": "configuring", 00:07:23.206 "raid_level": "raid0", 00:07:23.206 "superblock": true, 00:07:23.206 "num_base_bdevs": 3, 00:07:23.206 "num_base_bdevs_discovered": 2, 00:07:23.206 "num_base_bdevs_operational": 3, 00:07:23.206 "base_bdevs_list": [ 00:07:23.206 { 00:07:23.206 "name": "BaseBdev1", 00:07:23.206 "uuid": "6a8326fb-7c41-419b-bbe5-13216acc5891", 00:07:23.206 "is_configured": true, 00:07:23.206 "data_offset": 2048, 00:07:23.206 "data_size": 63488 00:07:23.206 }, 00:07:23.206 { 00:07:23.206 "name": "BaseBdev2", 00:07:23.206 "uuid": "67da7746-5ca8-4aab-94e8-45760821bb67", 00:07:23.206 "is_configured": true, 00:07:23.206 "data_offset": 2048, 00:07:23.206 "data_size": 63488 00:07:23.206 }, 
00:07:23.206 { 00:07:23.206 "name": "BaseBdev3", 00:07:23.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.206 "is_configured": false, 00:07:23.206 "data_offset": 0, 00:07:23.206 "data_size": 0 00:07:23.206 } 00:07:23.206 ] 00:07:23.206 }' 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.206 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.468 09:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:23.468 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.468 09:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.468 [2024-10-30 09:42:02.037026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:23.468 [2024-10-30 09:42:02.037271] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:23.468 [2024-10-30 09:42:02.037292] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:23.468 BaseBdev3 00:07:23.468 [2024-10-30 09:42:02.037554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:23.468 [2024-10-30 09:42:02.037687] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:23.468 [2024-10-30 09:42:02.037696] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:23.468 [2024-10-30 09:42:02.037829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.468 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.468 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:23.468 09:42:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:07:23.468 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:23.468 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:23.468 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:23.468 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:23.468 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:23.468 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.468 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.468 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.468 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:23.468 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.468 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.468 [ 00:07:23.468 { 00:07:23.468 "name": "BaseBdev3", 00:07:23.468 "aliases": [ 00:07:23.468 "7c49d98d-8741-4099-a76b-6aa1dd9590ef" 00:07:23.468 ], 00:07:23.468 "product_name": "Malloc disk", 00:07:23.468 "block_size": 512, 00:07:23.468 "num_blocks": 65536, 00:07:23.468 "uuid": "7c49d98d-8741-4099-a76b-6aa1dd9590ef", 00:07:23.468 "assigned_rate_limits": { 00:07:23.468 "rw_ios_per_sec": 0, 00:07:23.468 "rw_mbytes_per_sec": 0, 00:07:23.468 "r_mbytes_per_sec": 0, 00:07:23.468 "w_mbytes_per_sec": 0 00:07:23.468 }, 00:07:23.468 "claimed": true, 00:07:23.468 "claim_type": "exclusive_write", 00:07:23.468 "zoned": false, 
00:07:23.468 "supported_io_types": { 00:07:23.468 "read": true, 00:07:23.468 "write": true, 00:07:23.468 "unmap": true, 00:07:23.468 "flush": true, 00:07:23.468 "reset": true, 00:07:23.468 "nvme_admin": false, 00:07:23.468 "nvme_io": false, 00:07:23.468 "nvme_io_md": false, 00:07:23.468 "write_zeroes": true, 00:07:23.469 "zcopy": true, 00:07:23.469 "get_zone_info": false, 00:07:23.469 "zone_management": false, 00:07:23.469 "zone_append": false, 00:07:23.469 "compare": false, 00:07:23.469 "compare_and_write": false, 00:07:23.469 "abort": true, 00:07:23.469 "seek_hole": false, 00:07:23.469 "seek_data": false, 00:07:23.469 "copy": true, 00:07:23.469 "nvme_iov_md": false 00:07:23.469 }, 00:07:23.469 "memory_domains": [ 00:07:23.469 { 00:07:23.469 "dma_device_id": "system", 00:07:23.469 "dma_device_type": 1 00:07:23.469 }, 00:07:23.469 { 00:07:23.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.469 "dma_device_type": 2 00:07:23.469 } 00:07:23.469 ], 00:07:23.469 "driver_specific": {} 00:07:23.469 } 00:07:23.469 ] 00:07:23.469 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.469 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:23.469 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:23.469 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:23.469 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:23.469 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.469 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:23.469 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:23.469 09:42:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.469 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:23.469 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.469 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.469 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.469 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.469 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.469 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.469 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.469 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.469 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.730 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.730 "name": "Existed_Raid", 00:07:23.730 "uuid": "3facbd44-c841-4d28-b21f-8e5b84f9b8a4", 00:07:23.730 "strip_size_kb": 64, 00:07:23.730 "state": "online", 00:07:23.730 "raid_level": "raid0", 00:07:23.730 "superblock": true, 00:07:23.730 "num_base_bdevs": 3, 00:07:23.730 "num_base_bdevs_discovered": 3, 00:07:23.730 "num_base_bdevs_operational": 3, 00:07:23.730 "base_bdevs_list": [ 00:07:23.730 { 00:07:23.730 "name": "BaseBdev1", 00:07:23.730 "uuid": "6a8326fb-7c41-419b-bbe5-13216acc5891", 00:07:23.730 "is_configured": true, 00:07:23.730 "data_offset": 2048, 00:07:23.730 "data_size": 63488 00:07:23.730 }, 00:07:23.730 { 00:07:23.730 
"name": "BaseBdev2", 00:07:23.730 "uuid": "67da7746-5ca8-4aab-94e8-45760821bb67", 00:07:23.730 "is_configured": true, 00:07:23.730 "data_offset": 2048, 00:07:23.730 "data_size": 63488 00:07:23.730 }, 00:07:23.730 { 00:07:23.730 "name": "BaseBdev3", 00:07:23.730 "uuid": "7c49d98d-8741-4099-a76b-6aa1dd9590ef", 00:07:23.730 "is_configured": true, 00:07:23.730 "data_offset": 2048, 00:07:23.730 "data_size": 63488 00:07:23.730 } 00:07:23.731 ] 00:07:23.731 }' 00:07:23.731 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.731 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.993 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:23.993 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:23.993 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:23.993 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:23.993 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:23.993 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:23.993 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:23.993 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:23.993 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.993 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.993 [2024-10-30 09:42:02.381489] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:23.993 09:42:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.993 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:23.993 "name": "Existed_Raid", 00:07:23.993 "aliases": [ 00:07:23.993 "3facbd44-c841-4d28-b21f-8e5b84f9b8a4" 00:07:23.993 ], 00:07:23.993 "product_name": "Raid Volume", 00:07:23.993 "block_size": 512, 00:07:23.993 "num_blocks": 190464, 00:07:23.993 "uuid": "3facbd44-c841-4d28-b21f-8e5b84f9b8a4", 00:07:23.993 "assigned_rate_limits": { 00:07:23.993 "rw_ios_per_sec": 0, 00:07:23.993 "rw_mbytes_per_sec": 0, 00:07:23.993 "r_mbytes_per_sec": 0, 00:07:23.993 "w_mbytes_per_sec": 0 00:07:23.993 }, 00:07:23.993 "claimed": false, 00:07:23.993 "zoned": false, 00:07:23.993 "supported_io_types": { 00:07:23.993 "read": true, 00:07:23.993 "write": true, 00:07:23.993 "unmap": true, 00:07:23.993 "flush": true, 00:07:23.993 "reset": true, 00:07:23.993 "nvme_admin": false, 00:07:23.993 "nvme_io": false, 00:07:23.993 "nvme_io_md": false, 00:07:23.993 "write_zeroes": true, 00:07:23.993 "zcopy": false, 00:07:23.993 "get_zone_info": false, 00:07:23.993 "zone_management": false, 00:07:23.993 "zone_append": false, 00:07:23.993 "compare": false, 00:07:23.993 "compare_and_write": false, 00:07:23.993 "abort": false, 00:07:23.993 "seek_hole": false, 00:07:23.993 "seek_data": false, 00:07:23.993 "copy": false, 00:07:23.993 "nvme_iov_md": false 00:07:23.993 }, 00:07:23.993 "memory_domains": [ 00:07:23.993 { 00:07:23.993 "dma_device_id": "system", 00:07:23.993 "dma_device_type": 1 00:07:23.993 }, 00:07:23.993 { 00:07:23.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.993 "dma_device_type": 2 00:07:23.993 }, 00:07:23.993 { 00:07:23.993 "dma_device_id": "system", 00:07:23.993 "dma_device_type": 1 00:07:23.993 }, 00:07:23.993 { 00:07:23.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.993 "dma_device_type": 2 00:07:23.993 }, 00:07:23.993 { 00:07:23.993 "dma_device_id": "system", 00:07:23.993 "dma_device_type": 1 00:07:23.993 }, 
00:07:23.993 { 00:07:23.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.993 "dma_device_type": 2 00:07:23.994 } 00:07:23.994 ], 00:07:23.994 "driver_specific": { 00:07:23.994 "raid": { 00:07:23.994 "uuid": "3facbd44-c841-4d28-b21f-8e5b84f9b8a4", 00:07:23.994 "strip_size_kb": 64, 00:07:23.994 "state": "online", 00:07:23.994 "raid_level": "raid0", 00:07:23.994 "superblock": true, 00:07:23.994 "num_base_bdevs": 3, 00:07:23.994 "num_base_bdevs_discovered": 3, 00:07:23.994 "num_base_bdevs_operational": 3, 00:07:23.994 "base_bdevs_list": [ 00:07:23.994 { 00:07:23.994 "name": "BaseBdev1", 00:07:23.994 "uuid": "6a8326fb-7c41-419b-bbe5-13216acc5891", 00:07:23.994 "is_configured": true, 00:07:23.994 "data_offset": 2048, 00:07:23.994 "data_size": 63488 00:07:23.994 }, 00:07:23.994 { 00:07:23.994 "name": "BaseBdev2", 00:07:23.994 "uuid": "67da7746-5ca8-4aab-94e8-45760821bb67", 00:07:23.994 "is_configured": true, 00:07:23.994 "data_offset": 2048, 00:07:23.994 "data_size": 63488 00:07:23.994 }, 00:07:23.994 { 00:07:23.994 "name": "BaseBdev3", 00:07:23.994 "uuid": "7c49d98d-8741-4099-a76b-6aa1dd9590ef", 00:07:23.994 "is_configured": true, 00:07:23.994 "data_offset": 2048, 00:07:23.994 "data_size": 63488 00:07:23.994 } 00:07:23.994 ] 00:07:23.994 } 00:07:23.994 } 00:07:23.994 }' 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:23.994 BaseBdev2 00:07:23.994 BaseBdev3' 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for 
name in $base_bdev_names 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:23.994 09:42:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.994 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.994 [2024-10-30 09:42:02.577255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:23.994 [2024-10-30 09:42:02.577370] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:23.994 [2024-10-30 09:42:02.577433] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:24.256 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.256 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:24.256 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:24.256 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:24.256 09:42:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:24.256 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:24.256 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:24.256 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.256 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:24.256 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:24.256 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.256 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.256 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.256 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.256 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.256 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.256 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.256 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.256 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.256 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.256 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.256 09:42:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.256 "name": "Existed_Raid", 00:07:24.256 "uuid": "3facbd44-c841-4d28-b21f-8e5b84f9b8a4", 00:07:24.256 "strip_size_kb": 64, 00:07:24.256 "state": "offline", 00:07:24.256 "raid_level": "raid0", 00:07:24.256 "superblock": true, 00:07:24.256 "num_base_bdevs": 3, 00:07:24.256 "num_base_bdevs_discovered": 2, 00:07:24.256 "num_base_bdevs_operational": 2, 00:07:24.256 "base_bdevs_list": [ 00:07:24.256 { 00:07:24.256 "name": null, 00:07:24.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.256 "is_configured": false, 00:07:24.256 "data_offset": 0, 00:07:24.256 "data_size": 63488 00:07:24.256 }, 00:07:24.256 { 00:07:24.256 "name": "BaseBdev2", 00:07:24.256 "uuid": "67da7746-5ca8-4aab-94e8-45760821bb67", 00:07:24.256 "is_configured": true, 00:07:24.256 "data_offset": 2048, 00:07:24.256 "data_size": 63488 00:07:24.256 }, 00:07:24.256 { 00:07:24.256 "name": "BaseBdev3", 00:07:24.256 "uuid": "7c49d98d-8741-4099-a76b-6aa1dd9590ef", 00:07:24.256 "is_configured": true, 00:07:24.256 "data_offset": 2048, 00:07:24.256 "data_size": 63488 00:07:24.256 } 00:07:24.256 ] 00:07:24.256 }' 00:07:24.256 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.256 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.518 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:24.518 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:24.518 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:24.518 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.518 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.518 09:42:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.518 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.518 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:24.518 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:24.518 09:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:24.518 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.518 09:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.518 [2024-10-30 09:42:02.979940] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:24.518 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.518 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:24.518 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:24.518 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.518 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:24.518 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.518 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.518 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.518 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:24.518 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:07:24.518 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:24.518 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.518 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.519 [2024-10-30 09:42:03.070564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:24.519 [2024-10-30 09:42:03.070607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:24.519 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.519 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:24.519 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:24.519 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:24.519 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.519 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.519 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.779 BaseBdev2 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:24.779 [ 00:07:24.779 { 00:07:24.779 "name": "BaseBdev2", 00:07:24.779 "aliases": [ 00:07:24.779 "6ca37dc8-ceff-4d03-89fb-359cd8a3f11c" 00:07:24.779 ], 00:07:24.779 "product_name": "Malloc disk", 00:07:24.779 "block_size": 512, 00:07:24.779 "num_blocks": 65536, 00:07:24.779 "uuid": "6ca37dc8-ceff-4d03-89fb-359cd8a3f11c", 00:07:24.779 "assigned_rate_limits": { 00:07:24.779 "rw_ios_per_sec": 0, 00:07:24.779 "rw_mbytes_per_sec": 0, 00:07:24.779 "r_mbytes_per_sec": 0, 00:07:24.779 "w_mbytes_per_sec": 0 00:07:24.779 }, 00:07:24.779 "claimed": false, 00:07:24.779 "zoned": false, 00:07:24.779 "supported_io_types": { 00:07:24.779 "read": true, 00:07:24.779 "write": true, 00:07:24.779 "unmap": true, 00:07:24.779 "flush": true, 00:07:24.779 "reset": true, 00:07:24.779 "nvme_admin": false, 00:07:24.779 "nvme_io": false, 00:07:24.779 "nvme_io_md": false, 00:07:24.779 "write_zeroes": true, 00:07:24.779 "zcopy": true, 00:07:24.779 "get_zone_info": false, 00:07:24.779 "zone_management": false, 00:07:24.779 "zone_append": false, 00:07:24.779 "compare": false, 00:07:24.779 "compare_and_write": false, 00:07:24.779 "abort": true, 00:07:24.779 "seek_hole": false, 00:07:24.779 "seek_data": false, 00:07:24.779 "copy": true, 00:07:24.779 "nvme_iov_md": false 00:07:24.779 }, 00:07:24.779 "memory_domains": [ 00:07:24.779 { 00:07:24.779 "dma_device_id": "system", 00:07:24.779 "dma_device_type": 1 00:07:24.779 }, 00:07:24.779 { 00:07:24.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.779 "dma_device_type": 2 00:07:24.779 } 00:07:24.779 ], 00:07:24.779 "driver_specific": {} 00:07:24.779 } 00:07:24.779 ] 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:24.779 09:42:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.779 BaseBdev3 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:24.779 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.779 09:42:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.779 [ 00:07:24.779 { 00:07:24.779 "name": "BaseBdev3", 00:07:24.779 "aliases": [ 00:07:24.779 "fa07e3b1-3b2d-4ef9-ab44-873231fd77ae" 00:07:24.779 ], 00:07:24.779 "product_name": "Malloc disk", 00:07:24.779 "block_size": 512, 00:07:24.779 "num_blocks": 65536, 00:07:24.779 "uuid": "fa07e3b1-3b2d-4ef9-ab44-873231fd77ae", 00:07:24.779 "assigned_rate_limits": { 00:07:24.779 "rw_ios_per_sec": 0, 00:07:24.779 "rw_mbytes_per_sec": 0, 00:07:24.779 "r_mbytes_per_sec": 0, 00:07:24.779 "w_mbytes_per_sec": 0 00:07:24.779 }, 00:07:24.779 "claimed": false, 00:07:24.779 "zoned": false, 00:07:24.779 "supported_io_types": { 00:07:24.779 "read": true, 00:07:24.779 "write": true, 00:07:24.779 "unmap": true, 00:07:24.779 "flush": true, 00:07:24.779 "reset": true, 00:07:24.779 "nvme_admin": false, 00:07:24.779 "nvme_io": false, 00:07:24.779 "nvme_io_md": false, 00:07:24.779 "write_zeroes": true, 00:07:24.779 "zcopy": true, 00:07:24.779 "get_zone_info": false, 00:07:24.779 "zone_management": false, 00:07:24.779 "zone_append": false, 00:07:24.779 "compare": false, 00:07:24.779 "compare_and_write": false, 00:07:24.779 "abort": true, 00:07:24.779 "seek_hole": false, 00:07:24.779 "seek_data": false, 00:07:24.779 "copy": true, 00:07:24.779 "nvme_iov_md": false 00:07:24.779 }, 00:07:24.779 "memory_domains": [ 00:07:24.779 { 00:07:24.779 "dma_device_id": "system", 00:07:24.779 "dma_device_type": 1 00:07:24.779 }, 00:07:24.779 { 00:07:24.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.779 "dma_device_type": 2 00:07:24.779 } 00:07:24.779 ], 00:07:24.779 "driver_specific": {} 00:07:24.779 } 00:07:24.779 ] 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.780 [2024-10-30 09:42:03.282260] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:24.780 [2024-10-30 09:42:03.282396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:24.780 [2024-10-30 09:42:03.282470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:24.780 [2024-10-30 09:42:03.284315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.780 "name": "Existed_Raid", 00:07:24.780 "uuid": "7cbde2fc-ac8c-407c-a123-a16b095a36e7", 00:07:24.780 "strip_size_kb": 64, 00:07:24.780 "state": "configuring", 00:07:24.780 "raid_level": "raid0", 00:07:24.780 "superblock": true, 00:07:24.780 "num_base_bdevs": 3, 00:07:24.780 "num_base_bdevs_discovered": 2, 00:07:24.780 "num_base_bdevs_operational": 3, 00:07:24.780 "base_bdevs_list": [ 00:07:24.780 { 00:07:24.780 "name": "BaseBdev1", 00:07:24.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.780 "is_configured": false, 00:07:24.780 "data_offset": 0, 00:07:24.780 "data_size": 0 00:07:24.780 }, 00:07:24.780 { 00:07:24.780 "name": "BaseBdev2", 00:07:24.780 "uuid": "6ca37dc8-ceff-4d03-89fb-359cd8a3f11c", 00:07:24.780 "is_configured": true, 00:07:24.780 "data_offset": 2048, 00:07:24.780 "data_size": 63488 00:07:24.780 }, 00:07:24.780 { 00:07:24.780 "name": "BaseBdev3", 00:07:24.780 "uuid": 
"fa07e3b1-3b2d-4ef9-ab44-873231fd77ae", 00:07:24.780 "is_configured": true, 00:07:24.780 "data_offset": 2048, 00:07:24.780 "data_size": 63488 00:07:24.780 } 00:07:24.780 ] 00:07:24.780 }' 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.780 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.040 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:25.040 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.040 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.040 [2024-10-30 09:42:03.590305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:25.040 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.040 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:25.041 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.041 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.041 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:25.041 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.041 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:25.041 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.041 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.041 09:42:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.041 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.041 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.041 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.041 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.041 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.041 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.041 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.041 "name": "Existed_Raid", 00:07:25.041 "uuid": "7cbde2fc-ac8c-407c-a123-a16b095a36e7", 00:07:25.041 "strip_size_kb": 64, 00:07:25.041 "state": "configuring", 00:07:25.041 "raid_level": "raid0", 00:07:25.041 "superblock": true, 00:07:25.041 "num_base_bdevs": 3, 00:07:25.041 "num_base_bdevs_discovered": 1, 00:07:25.041 "num_base_bdevs_operational": 3, 00:07:25.041 "base_bdevs_list": [ 00:07:25.041 { 00:07:25.041 "name": "BaseBdev1", 00:07:25.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.041 "is_configured": false, 00:07:25.041 "data_offset": 0, 00:07:25.041 "data_size": 0 00:07:25.041 }, 00:07:25.041 { 00:07:25.041 "name": null, 00:07:25.041 "uuid": "6ca37dc8-ceff-4d03-89fb-359cd8a3f11c", 00:07:25.041 "is_configured": false, 00:07:25.041 "data_offset": 0, 00:07:25.041 "data_size": 63488 00:07:25.041 }, 00:07:25.041 { 00:07:25.041 "name": "BaseBdev3", 00:07:25.041 "uuid": "fa07e3b1-3b2d-4ef9-ab44-873231fd77ae", 00:07:25.041 "is_configured": true, 00:07:25.041 "data_offset": 2048, 00:07:25.041 "data_size": 63488 00:07:25.041 } 00:07:25.041 ] 00:07:25.041 }' 00:07:25.041 09:42:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.041 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.301 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.301 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.301 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.301 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:25.301 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.563 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:25.563 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:25.563 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.563 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.563 [2024-10-30 09:42:03.948756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:25.563 BaseBdev1 00:07:25.563 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.563 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:25.563 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:25.563 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:25.563 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:25.563 09:42:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:25.563 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:25.563 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:25.563 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.563 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.563 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.563 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:25.563 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.563 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.563 [ 00:07:25.563 { 00:07:25.563 "name": "BaseBdev1", 00:07:25.563 "aliases": [ 00:07:25.563 "f5e34bfa-ee11-497e-8adb-9bc9f661bb1a" 00:07:25.563 ], 00:07:25.563 "product_name": "Malloc disk", 00:07:25.563 "block_size": 512, 00:07:25.563 "num_blocks": 65536, 00:07:25.563 "uuid": "f5e34bfa-ee11-497e-8adb-9bc9f661bb1a", 00:07:25.563 "assigned_rate_limits": { 00:07:25.563 "rw_ios_per_sec": 0, 00:07:25.563 "rw_mbytes_per_sec": 0, 00:07:25.563 "r_mbytes_per_sec": 0, 00:07:25.563 "w_mbytes_per_sec": 0 00:07:25.563 }, 00:07:25.563 "claimed": true, 00:07:25.563 "claim_type": "exclusive_write", 00:07:25.563 "zoned": false, 00:07:25.563 "supported_io_types": { 00:07:25.563 "read": true, 00:07:25.563 "write": true, 00:07:25.563 "unmap": true, 00:07:25.563 "flush": true, 00:07:25.563 "reset": true, 00:07:25.563 "nvme_admin": false, 00:07:25.563 "nvme_io": false, 00:07:25.563 "nvme_io_md": false, 00:07:25.563 "write_zeroes": true, 00:07:25.563 "zcopy": true, 
00:07:25.563 "get_zone_info": false, 00:07:25.563 "zone_management": false, 00:07:25.563 "zone_append": false, 00:07:25.563 "compare": false, 00:07:25.563 "compare_and_write": false, 00:07:25.563 "abort": true, 00:07:25.563 "seek_hole": false, 00:07:25.563 "seek_data": false, 00:07:25.563 "copy": true, 00:07:25.563 "nvme_iov_md": false 00:07:25.563 }, 00:07:25.564 "memory_domains": [ 00:07:25.564 { 00:07:25.564 "dma_device_id": "system", 00:07:25.564 "dma_device_type": 1 00:07:25.564 }, 00:07:25.564 { 00:07:25.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.564 "dma_device_type": 2 00:07:25.564 } 00:07:25.564 ], 00:07:25.564 "driver_specific": {} 00:07:25.564 } 00:07:25.564 ] 00:07:25.564 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.564 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:25.564 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:25.564 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.564 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.564 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:25.564 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.564 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:25.564 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.564 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.564 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:07:25.564 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.564 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.564 09:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.564 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.564 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.564 09:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.564 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.564 "name": "Existed_Raid", 00:07:25.564 "uuid": "7cbde2fc-ac8c-407c-a123-a16b095a36e7", 00:07:25.564 "strip_size_kb": 64, 00:07:25.564 "state": "configuring", 00:07:25.564 "raid_level": "raid0", 00:07:25.564 "superblock": true, 00:07:25.564 "num_base_bdevs": 3, 00:07:25.564 "num_base_bdevs_discovered": 2, 00:07:25.564 "num_base_bdevs_operational": 3, 00:07:25.564 "base_bdevs_list": [ 00:07:25.564 { 00:07:25.564 "name": "BaseBdev1", 00:07:25.564 "uuid": "f5e34bfa-ee11-497e-8adb-9bc9f661bb1a", 00:07:25.564 "is_configured": true, 00:07:25.564 "data_offset": 2048, 00:07:25.564 "data_size": 63488 00:07:25.564 }, 00:07:25.564 { 00:07:25.564 "name": null, 00:07:25.564 "uuid": "6ca37dc8-ceff-4d03-89fb-359cd8a3f11c", 00:07:25.564 "is_configured": false, 00:07:25.564 "data_offset": 0, 00:07:25.564 "data_size": 63488 00:07:25.564 }, 00:07:25.564 { 00:07:25.564 "name": "BaseBdev3", 00:07:25.564 "uuid": "fa07e3b1-3b2d-4ef9-ab44-873231fd77ae", 00:07:25.564 "is_configured": true, 00:07:25.564 "data_offset": 2048, 00:07:25.564 "data_size": 63488 00:07:25.564 } 00:07:25.564 ] 00:07:25.564 }' 00:07:25.564 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:07:25.564 09:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.826 [2024-10-30 09:42:04.316880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.826 "name": "Existed_Raid", 00:07:25.826 "uuid": "7cbde2fc-ac8c-407c-a123-a16b095a36e7", 00:07:25.826 "strip_size_kb": 64, 00:07:25.826 "state": "configuring", 00:07:25.826 "raid_level": "raid0", 00:07:25.826 "superblock": true, 00:07:25.826 "num_base_bdevs": 3, 00:07:25.826 "num_base_bdevs_discovered": 1, 00:07:25.826 "num_base_bdevs_operational": 3, 00:07:25.826 "base_bdevs_list": [ 00:07:25.826 { 00:07:25.826 "name": "BaseBdev1", 00:07:25.826 "uuid": "f5e34bfa-ee11-497e-8adb-9bc9f661bb1a", 00:07:25.826 "is_configured": true, 00:07:25.826 "data_offset": 2048, 00:07:25.826 "data_size": 63488 00:07:25.826 }, 00:07:25.826 { 00:07:25.826 "name": null, 00:07:25.826 "uuid": 
"6ca37dc8-ceff-4d03-89fb-359cd8a3f11c", 00:07:25.826 "is_configured": false, 00:07:25.826 "data_offset": 0, 00:07:25.826 "data_size": 63488 00:07:25.826 }, 00:07:25.826 { 00:07:25.826 "name": null, 00:07:25.826 "uuid": "fa07e3b1-3b2d-4ef9-ab44-873231fd77ae", 00:07:25.826 "is_configured": false, 00:07:25.826 "data_offset": 0, 00:07:25.826 "data_size": 63488 00:07:25.826 } 00:07:25.826 ] 00:07:25.826 }' 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.826 09:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.182 [2024-10-30 09:42:04.649011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.182 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.182 "name": "Existed_Raid", 00:07:26.182 "uuid": "7cbde2fc-ac8c-407c-a123-a16b095a36e7", 00:07:26.182 "strip_size_kb": 64, 00:07:26.182 "state": "configuring", 00:07:26.182 "raid_level": "raid0", 
00:07:26.182 "superblock": true, 00:07:26.182 "num_base_bdevs": 3, 00:07:26.182 "num_base_bdevs_discovered": 2, 00:07:26.182 "num_base_bdevs_operational": 3, 00:07:26.182 "base_bdevs_list": [ 00:07:26.182 { 00:07:26.182 "name": "BaseBdev1", 00:07:26.182 "uuid": "f5e34bfa-ee11-497e-8adb-9bc9f661bb1a", 00:07:26.182 "is_configured": true, 00:07:26.182 "data_offset": 2048, 00:07:26.182 "data_size": 63488 00:07:26.182 }, 00:07:26.182 { 00:07:26.182 "name": null, 00:07:26.182 "uuid": "6ca37dc8-ceff-4d03-89fb-359cd8a3f11c", 00:07:26.182 "is_configured": false, 00:07:26.182 "data_offset": 0, 00:07:26.182 "data_size": 63488 00:07:26.182 }, 00:07:26.182 { 00:07:26.182 "name": "BaseBdev3", 00:07:26.182 "uuid": "fa07e3b1-3b2d-4ef9-ab44-873231fd77ae", 00:07:26.182 "is_configured": true, 00:07:26.182 "data_offset": 2048, 00:07:26.182 "data_size": 63488 00:07:26.182 } 00:07:26.182 ] 00:07:26.182 }' 00:07:26.183 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.183 09:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.446 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.446 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:26.446 09:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.446 09:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.446 09:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.446 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:26.446 09:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:26.446 09:42:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.446 09:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.446 [2024-10-30 09:42:05.001112] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:26.446 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.446 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:26.446 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.446 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:26.446 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:26.446 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.446 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:26.446 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.446 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.446 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.446 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.446 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.446 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.446 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.708 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:07:26.708 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.708 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.708 "name": "Existed_Raid", 00:07:26.708 "uuid": "7cbde2fc-ac8c-407c-a123-a16b095a36e7", 00:07:26.708 "strip_size_kb": 64, 00:07:26.708 "state": "configuring", 00:07:26.708 "raid_level": "raid0", 00:07:26.708 "superblock": true, 00:07:26.708 "num_base_bdevs": 3, 00:07:26.708 "num_base_bdevs_discovered": 1, 00:07:26.708 "num_base_bdevs_operational": 3, 00:07:26.708 "base_bdevs_list": [ 00:07:26.708 { 00:07:26.708 "name": null, 00:07:26.708 "uuid": "f5e34bfa-ee11-497e-8adb-9bc9f661bb1a", 00:07:26.708 "is_configured": false, 00:07:26.708 "data_offset": 0, 00:07:26.708 "data_size": 63488 00:07:26.708 }, 00:07:26.708 { 00:07:26.708 "name": null, 00:07:26.708 "uuid": "6ca37dc8-ceff-4d03-89fb-359cd8a3f11c", 00:07:26.708 "is_configured": false, 00:07:26.708 "data_offset": 0, 00:07:26.708 "data_size": 63488 00:07:26.708 }, 00:07:26.708 { 00:07:26.708 "name": "BaseBdev3", 00:07:26.708 "uuid": "fa07e3b1-3b2d-4ef9-ab44-873231fd77ae", 00:07:26.708 "is_configured": true, 00:07:26.708 "data_offset": 2048, 00:07:26.708 "data_size": 63488 00:07:26.708 } 00:07:26.708 ] 00:07:26.708 }' 00:07:26.708 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.708 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.970 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.970 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.970 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:26.970 09:42:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:26.970 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.970 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:26.970 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:26.970 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.970 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.970 [2024-10-30 09:42:05.416378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:26.970 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.970 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:26.970 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.970 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:26.970 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:26.970 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.970 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:26.970 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.970 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.970 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.970 09:42:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.970 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.970 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.970 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.970 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.970 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.970 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.970 "name": "Existed_Raid", 00:07:26.970 "uuid": "7cbde2fc-ac8c-407c-a123-a16b095a36e7", 00:07:26.970 "strip_size_kb": 64, 00:07:26.970 "state": "configuring", 00:07:26.970 "raid_level": "raid0", 00:07:26.970 "superblock": true, 00:07:26.970 "num_base_bdevs": 3, 00:07:26.970 "num_base_bdevs_discovered": 2, 00:07:26.970 "num_base_bdevs_operational": 3, 00:07:26.970 "base_bdevs_list": [ 00:07:26.970 { 00:07:26.970 "name": null, 00:07:26.970 "uuid": "f5e34bfa-ee11-497e-8adb-9bc9f661bb1a", 00:07:26.970 "is_configured": false, 00:07:26.970 "data_offset": 0, 00:07:26.970 "data_size": 63488 00:07:26.970 }, 00:07:26.970 { 00:07:26.970 "name": "BaseBdev2", 00:07:26.970 "uuid": "6ca37dc8-ceff-4d03-89fb-359cd8a3f11c", 00:07:26.970 "is_configured": true, 00:07:26.970 "data_offset": 2048, 00:07:26.970 "data_size": 63488 00:07:26.970 }, 00:07:26.970 { 00:07:26.971 "name": "BaseBdev3", 00:07:26.971 "uuid": "fa07e3b1-3b2d-4ef9-ab44-873231fd77ae", 00:07:26.971 "is_configured": true, 00:07:26.971 "data_offset": 2048, 00:07:26.971 "data_size": 63488 00:07:26.971 } 00:07:26.971 ] 00:07:26.971 }' 00:07:26.971 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.971 
09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f5e34bfa-ee11-497e-8adb-9bc9f661bb1a 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.232 [2024-10-30 09:42:05.819007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:27.232 [2024-10-30 09:42:05.819208] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000008200 00:07:27.232 [2024-10-30 09:42:05.819224] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:27.232 [2024-10-30 09:42:05.819462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:27.232 [2024-10-30 09:42:05.819581] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:27.232 [2024-10-30 09:42:05.819589] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:07:27.232 [2024-10-30 09:42:05.819704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.232 NewBaseBdev 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.232 09:42:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.232 [ 00:07:27.232 { 00:07:27.232 "name": "NewBaseBdev", 00:07:27.232 "aliases": [ 00:07:27.232 "f5e34bfa-ee11-497e-8adb-9bc9f661bb1a" 00:07:27.232 ], 00:07:27.232 "product_name": "Malloc disk", 00:07:27.232 "block_size": 512, 00:07:27.232 "num_blocks": 65536, 00:07:27.232 "uuid": "f5e34bfa-ee11-497e-8adb-9bc9f661bb1a", 00:07:27.232 "assigned_rate_limits": { 00:07:27.232 "rw_ios_per_sec": 0, 00:07:27.232 "rw_mbytes_per_sec": 0, 00:07:27.232 "r_mbytes_per_sec": 0, 00:07:27.232 "w_mbytes_per_sec": 0 00:07:27.232 }, 00:07:27.232 "claimed": true, 00:07:27.232 "claim_type": "exclusive_write", 00:07:27.232 "zoned": false, 00:07:27.232 "supported_io_types": { 00:07:27.232 "read": true, 00:07:27.232 "write": true, 00:07:27.232 "unmap": true, 00:07:27.232 "flush": true, 00:07:27.232 "reset": true, 00:07:27.232 "nvme_admin": false, 00:07:27.232 "nvme_io": false, 00:07:27.232 "nvme_io_md": false, 00:07:27.232 "write_zeroes": true, 00:07:27.232 "zcopy": true, 00:07:27.232 "get_zone_info": false, 00:07:27.232 "zone_management": false, 00:07:27.232 "zone_append": false, 00:07:27.232 "compare": false, 00:07:27.232 "compare_and_write": false, 00:07:27.232 "abort": true, 00:07:27.232 "seek_hole": false, 00:07:27.232 "seek_data": false, 00:07:27.232 "copy": true, 00:07:27.232 "nvme_iov_md": false 00:07:27.232 }, 00:07:27.232 "memory_domains": [ 00:07:27.232 { 00:07:27.232 "dma_device_id": "system", 00:07:27.232 "dma_device_type": 1 00:07:27.232 }, 00:07:27.232 { 00:07:27.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.232 "dma_device_type": 2 00:07:27.232 } 00:07:27.232 ], 00:07:27.232 "driver_specific": {} 00:07:27.232 } 00:07:27.232 
] 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.232 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.493 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.493 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.493 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.493 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.493 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.493 
09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.493 "name": "Existed_Raid", 00:07:27.493 "uuid": "7cbde2fc-ac8c-407c-a123-a16b095a36e7", 00:07:27.493 "strip_size_kb": 64, 00:07:27.493 "state": "online", 00:07:27.493 "raid_level": "raid0", 00:07:27.493 "superblock": true, 00:07:27.493 "num_base_bdevs": 3, 00:07:27.493 "num_base_bdevs_discovered": 3, 00:07:27.493 "num_base_bdevs_operational": 3, 00:07:27.493 "base_bdevs_list": [ 00:07:27.493 { 00:07:27.493 "name": "NewBaseBdev", 00:07:27.493 "uuid": "f5e34bfa-ee11-497e-8adb-9bc9f661bb1a", 00:07:27.493 "is_configured": true, 00:07:27.493 "data_offset": 2048, 00:07:27.493 "data_size": 63488 00:07:27.493 }, 00:07:27.493 { 00:07:27.493 "name": "BaseBdev2", 00:07:27.493 "uuid": "6ca37dc8-ceff-4d03-89fb-359cd8a3f11c", 00:07:27.493 "is_configured": true, 00:07:27.493 "data_offset": 2048, 00:07:27.493 "data_size": 63488 00:07:27.493 }, 00:07:27.493 { 00:07:27.493 "name": "BaseBdev3", 00:07:27.493 "uuid": "fa07e3b1-3b2d-4ef9-ab44-873231fd77ae", 00:07:27.493 "is_configured": true, 00:07:27.493 "data_offset": 2048, 00:07:27.493 "data_size": 63488 00:07:27.493 } 00:07:27.493 ] 00:07:27.493 }' 00:07:27.493 09:42:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.493 09:42:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.756 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:27.756 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:27.756 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:27.756 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:27.756 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 
00:07:27.756 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:27.756 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:27.756 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:27.756 09:42:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.756 09:42:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.756 [2024-10-30 09:42:06.163472] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.756 09:42:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.756 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:27.756 "name": "Existed_Raid", 00:07:27.756 "aliases": [ 00:07:27.756 "7cbde2fc-ac8c-407c-a123-a16b095a36e7" 00:07:27.756 ], 00:07:27.756 "product_name": "Raid Volume", 00:07:27.756 "block_size": 512, 00:07:27.756 "num_blocks": 190464, 00:07:27.756 "uuid": "7cbde2fc-ac8c-407c-a123-a16b095a36e7", 00:07:27.756 "assigned_rate_limits": { 00:07:27.757 "rw_ios_per_sec": 0, 00:07:27.757 "rw_mbytes_per_sec": 0, 00:07:27.757 "r_mbytes_per_sec": 0, 00:07:27.757 "w_mbytes_per_sec": 0 00:07:27.757 }, 00:07:27.757 "claimed": false, 00:07:27.757 "zoned": false, 00:07:27.757 "supported_io_types": { 00:07:27.757 "read": true, 00:07:27.757 "write": true, 00:07:27.757 "unmap": true, 00:07:27.757 "flush": true, 00:07:27.757 "reset": true, 00:07:27.757 "nvme_admin": false, 00:07:27.757 "nvme_io": false, 00:07:27.757 "nvme_io_md": false, 00:07:27.757 "write_zeroes": true, 00:07:27.757 "zcopy": false, 00:07:27.757 "get_zone_info": false, 00:07:27.757 "zone_management": false, 00:07:27.757 "zone_append": false, 00:07:27.757 "compare": false, 00:07:27.757 "compare_and_write": false, 
00:07:27.757 "abort": false, 00:07:27.757 "seek_hole": false, 00:07:27.757 "seek_data": false, 00:07:27.757 "copy": false, 00:07:27.757 "nvme_iov_md": false 00:07:27.757 }, 00:07:27.757 "memory_domains": [ 00:07:27.757 { 00:07:27.757 "dma_device_id": "system", 00:07:27.757 "dma_device_type": 1 00:07:27.757 }, 00:07:27.757 { 00:07:27.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.757 "dma_device_type": 2 00:07:27.757 }, 00:07:27.757 { 00:07:27.757 "dma_device_id": "system", 00:07:27.757 "dma_device_type": 1 00:07:27.757 }, 00:07:27.757 { 00:07:27.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.757 "dma_device_type": 2 00:07:27.757 }, 00:07:27.757 { 00:07:27.757 "dma_device_id": "system", 00:07:27.757 "dma_device_type": 1 00:07:27.757 }, 00:07:27.757 { 00:07:27.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.757 "dma_device_type": 2 00:07:27.757 } 00:07:27.757 ], 00:07:27.757 "driver_specific": { 00:07:27.757 "raid": { 00:07:27.757 "uuid": "7cbde2fc-ac8c-407c-a123-a16b095a36e7", 00:07:27.757 "strip_size_kb": 64, 00:07:27.757 "state": "online", 00:07:27.757 "raid_level": "raid0", 00:07:27.757 "superblock": true, 00:07:27.757 "num_base_bdevs": 3, 00:07:27.757 "num_base_bdevs_discovered": 3, 00:07:27.757 "num_base_bdevs_operational": 3, 00:07:27.757 "base_bdevs_list": [ 00:07:27.757 { 00:07:27.757 "name": "NewBaseBdev", 00:07:27.757 "uuid": "f5e34bfa-ee11-497e-8adb-9bc9f661bb1a", 00:07:27.757 "is_configured": true, 00:07:27.757 "data_offset": 2048, 00:07:27.757 "data_size": 63488 00:07:27.757 }, 00:07:27.757 { 00:07:27.757 "name": "BaseBdev2", 00:07:27.757 "uuid": "6ca37dc8-ceff-4d03-89fb-359cd8a3f11c", 00:07:27.757 "is_configured": true, 00:07:27.757 "data_offset": 2048, 00:07:27.757 "data_size": 63488 00:07:27.757 }, 00:07:27.757 { 00:07:27.757 "name": "BaseBdev3", 00:07:27.757 "uuid": "fa07e3b1-3b2d-4ef9-ab44-873231fd77ae", 00:07:27.757 "is_configured": true, 00:07:27.757 "data_offset": 2048, 00:07:27.757 "data_size": 63488 00:07:27.757 } 
00:07:27.757 ] 00:07:27.757 } 00:07:27.757 } 00:07:27.757 }' 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:27.757 BaseBdev2 00:07:27.757 BaseBdev3' 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.757 [2024-10-30 09:42:06.367202] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:27.757 [2024-10-30 09:42:06.367229] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:27.757 [2024-10-30 09:42:06.367295] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:27.757 [2024-10-30 09:42:06.367351] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:27.757 [2024-10-30 09:42:06.367363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63094 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 63094 ']' 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 63094 00:07:27.757 09:42:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:07:28.020 09:42:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:28.020 09:42:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63094 00:07:28.020 killing process with pid 63094 00:07:28.020 09:42:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:28.020 09:42:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:28.020 09:42:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63094' 00:07:28.020 09:42:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 63094 00:07:28.020 [2024-10-30 
09:42:06.398873] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:28.020 09:42:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 63094 00:07:28.020 [2024-10-30 09:42:06.585834] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:28.964 ************************************ 00:07:28.964 END TEST raid_state_function_test_sb 00:07:28.964 ************************************ 00:07:28.964 09:42:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:28.964 00:07:28.964 real 0m7.638s 00:07:28.964 user 0m12.189s 00:07:28.964 sys 0m1.212s 00:07:28.964 09:42:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:28.964 09:42:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.964 09:42:07 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:07:28.964 09:42:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:28.964 09:42:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:28.964 09:42:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:28.964 ************************************ 00:07:28.964 START TEST raid_superblock_test 00:07:28.964 ************************************ 00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 3 00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # 
base_bdevs_pt=() 00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63681 00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63681 00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 63681 ']' 00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:28.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:28.964 09:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.964 [2024-10-30 09:42:07.419371] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:07:28.964 [2024-10-30 09:42:07.419495] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63681 ] 00:07:28.964 [2024-10-30 09:42:07.579440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.225 [2024-10-30 09:42:07.680813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.225 [2024-10-30 09:42:07.816429] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.225 [2024-10-30 09:42:07.816468] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.832 malloc1 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.832 [2024-10-30 09:42:08.309422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:29.832 [2024-10-30 09:42:08.309482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.832 [2024-10-30 09:42:08.309503] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:29.832 [2024-10-30 09:42:08.309512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.832 [2024-10-30 09:42:08.311649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.832 [2024-10-30 09:42:08.311686] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:29.832 pt1 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.832 malloc2 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.832 [2024-10-30 09:42:08.349420] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:29.832 [2024-10-30 09:42:08.349467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.832 [2024-10-30 09:42:08.349487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:29.832 [2024-10-30 09:42:08.349496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.832 [2024-10-30 09:42:08.351584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.832 [2024-10-30 09:42:08.351616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:29.832 pt2 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:07:29.832 malloc3 00:07:29.832 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.833 [2024-10-30 09:42:08.404285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:29.833 [2024-10-30 09:42:08.404333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.833 [2024-10-30 09:42:08.404355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:29.833 [2024-10-30 09:42:08.404365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.833 [2024-10-30 09:42:08.406469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.833 [2024-10-30 09:42:08.406503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:29.833 pt3 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.833 [2024-10-30 09:42:08.412335] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:29.833 [2024-10-30 09:42:08.414159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:29.833 [2024-10-30 09:42:08.414223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:29.833 [2024-10-30 09:42:08.414371] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:29.833 [2024-10-30 09:42:08.414384] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:29.833 [2024-10-30 09:42:08.414631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:29.833 [2024-10-30 09:42:08.414775] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:29.833 [2024-10-30 09:42:08.414784] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:29.833 [2024-10-30 09:42:08.414912] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.833 "name": "raid_bdev1", 00:07:29.833 "uuid": "0523d5ed-2aab-411c-a116-add7b1405c8a", 00:07:29.833 "strip_size_kb": 64, 00:07:29.833 "state": "online", 00:07:29.833 "raid_level": "raid0", 00:07:29.833 "superblock": true, 00:07:29.833 "num_base_bdevs": 3, 00:07:29.833 "num_base_bdevs_discovered": 3, 00:07:29.833 "num_base_bdevs_operational": 3, 00:07:29.833 "base_bdevs_list": [ 00:07:29.833 { 00:07:29.833 "name": "pt1", 00:07:29.833 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:29.833 "is_configured": true, 00:07:29.833 "data_offset": 2048, 00:07:29.833 "data_size": 63488 00:07:29.833 }, 00:07:29.833 { 00:07:29.833 "name": "pt2", 00:07:29.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:29.833 "is_configured": true, 00:07:29.833 "data_offset": 2048, 00:07:29.833 "data_size": 63488 00:07:29.833 }, 00:07:29.833 { 00:07:29.833 "name": "pt3", 00:07:29.833 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:29.833 "is_configured": true, 00:07:29.833 "data_offset": 2048, 00:07:29.833 "data_size": 
63488 00:07:29.833 } 00:07:29.833 ] 00:07:29.833 }' 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.833 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.407 [2024-10-30 09:42:08.740740] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:30.407 "name": "raid_bdev1", 00:07:30.407 "aliases": [ 00:07:30.407 "0523d5ed-2aab-411c-a116-add7b1405c8a" 00:07:30.407 ], 00:07:30.407 "product_name": "Raid Volume", 00:07:30.407 "block_size": 512, 00:07:30.407 "num_blocks": 190464, 00:07:30.407 "uuid": "0523d5ed-2aab-411c-a116-add7b1405c8a", 00:07:30.407 "assigned_rate_limits": { 00:07:30.407 
"rw_ios_per_sec": 0, 00:07:30.407 "rw_mbytes_per_sec": 0, 00:07:30.407 "r_mbytes_per_sec": 0, 00:07:30.407 "w_mbytes_per_sec": 0 00:07:30.407 }, 00:07:30.407 "claimed": false, 00:07:30.407 "zoned": false, 00:07:30.407 "supported_io_types": { 00:07:30.407 "read": true, 00:07:30.407 "write": true, 00:07:30.407 "unmap": true, 00:07:30.407 "flush": true, 00:07:30.407 "reset": true, 00:07:30.407 "nvme_admin": false, 00:07:30.407 "nvme_io": false, 00:07:30.407 "nvme_io_md": false, 00:07:30.407 "write_zeroes": true, 00:07:30.407 "zcopy": false, 00:07:30.407 "get_zone_info": false, 00:07:30.407 "zone_management": false, 00:07:30.407 "zone_append": false, 00:07:30.407 "compare": false, 00:07:30.407 "compare_and_write": false, 00:07:30.407 "abort": false, 00:07:30.407 "seek_hole": false, 00:07:30.407 "seek_data": false, 00:07:30.407 "copy": false, 00:07:30.407 "nvme_iov_md": false 00:07:30.407 }, 00:07:30.407 "memory_domains": [ 00:07:30.407 { 00:07:30.407 "dma_device_id": "system", 00:07:30.407 "dma_device_type": 1 00:07:30.407 }, 00:07:30.407 { 00:07:30.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.407 "dma_device_type": 2 00:07:30.407 }, 00:07:30.407 { 00:07:30.407 "dma_device_id": "system", 00:07:30.407 "dma_device_type": 1 00:07:30.407 }, 00:07:30.407 { 00:07:30.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.407 "dma_device_type": 2 00:07:30.407 }, 00:07:30.407 { 00:07:30.407 "dma_device_id": "system", 00:07:30.407 "dma_device_type": 1 00:07:30.407 }, 00:07:30.407 { 00:07:30.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.407 "dma_device_type": 2 00:07:30.407 } 00:07:30.407 ], 00:07:30.407 "driver_specific": { 00:07:30.407 "raid": { 00:07:30.407 "uuid": "0523d5ed-2aab-411c-a116-add7b1405c8a", 00:07:30.407 "strip_size_kb": 64, 00:07:30.407 "state": "online", 00:07:30.407 "raid_level": "raid0", 00:07:30.407 "superblock": true, 00:07:30.407 "num_base_bdevs": 3, 00:07:30.407 "num_base_bdevs_discovered": 3, 00:07:30.407 "num_base_bdevs_operational": 
3, 00:07:30.407 "base_bdevs_list": [ 00:07:30.407 { 00:07:30.407 "name": "pt1", 00:07:30.407 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:30.407 "is_configured": true, 00:07:30.407 "data_offset": 2048, 00:07:30.407 "data_size": 63488 00:07:30.407 }, 00:07:30.407 { 00:07:30.407 "name": "pt2", 00:07:30.407 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:30.407 "is_configured": true, 00:07:30.407 "data_offset": 2048, 00:07:30.407 "data_size": 63488 00:07:30.407 }, 00:07:30.407 { 00:07:30.407 "name": "pt3", 00:07:30.407 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:30.407 "is_configured": true, 00:07:30.407 "data_offset": 2048, 00:07:30.407 "data_size": 63488 00:07:30.407 } 00:07:30.407 ] 00:07:30.407 } 00:07:30.407 } 00:07:30.407 }' 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:30.407 pt2 00:07:30.407 pt3' 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.407 09:42:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:30.407 [2024-10-30 09:42:08.932764] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0523d5ed-2aab-411c-a116-add7b1405c8a 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0523d5ed-2aab-411c-a116-add7b1405c8a ']' 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.407 [2024-10-30 09:42:08.960414] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:30.407 [2024-10-30 09:42:08.960439] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:30.407 [2024-10-30 09:42:08.960503] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.407 [2024-10-30 09:42:08.960582] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:30.407 [2024-10-30 09:42:08.960592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 
00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.407 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:30.408 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.408 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.408 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.408 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.408 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:30.408 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:30.408 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:30.408 09:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:30.408 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.408 09:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.408 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.408 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:30.408 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:30.408 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.408 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.408 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.408 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 
00:07:30.408 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:07:30.408 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.408 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.408 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.408 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:30.408 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.408 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.408 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:30.669 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.669 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:30.669 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type 
-t "$arg")" in 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.670 [2024-10-30 09:42:09.060485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:30.670 [2024-10-30 09:42:09.062386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:30.670 [2024-10-30 09:42:09.062439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:07:30.670 [2024-10-30 09:42:09.062485] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:30.670 [2024-10-30 09:42:09.062530] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:30.670 [2024-10-30 09:42:09.062550] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:07:30.670 [2024-10-30 09:42:09.062567] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:30.670 [2024-10-30 09:42:09.062578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:30.670 request: 00:07:30.670 { 00:07:30.670 "name": "raid_bdev1", 00:07:30.670 "raid_level": "raid0", 00:07:30.670 "base_bdevs": [ 00:07:30.670 "malloc1", 00:07:30.670 "malloc2", 00:07:30.670 "malloc3" 00:07:30.670 ], 00:07:30.670 "strip_size_kb": 64, 00:07:30.670 "superblock": false, 00:07:30.670 "method": "bdev_raid_create", 00:07:30.670 "req_id": 1 00:07:30.670 } 00:07:30.670 Got JSON-RPC error response 00:07:30.670 response: 00:07:30.670 
{ 00:07:30.670 "code": -17, 00:07:30.670 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:30.670 } 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.670 [2024-10-30 09:42:09.104452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:30.670 [2024-10-30 09:42:09.104492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.670 [2024-10-30 09:42:09.104509] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:30.670 [2024-10-30 09:42:09.104533] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.670 [2024-10-30 09:42:09.106689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.670 [2024-10-30 09:42:09.106722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:30.670 [2024-10-30 09:42:09.106791] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:30.670 [2024-10-30 09:42:09.106836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:30.670 pt1 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.670 "name": "raid_bdev1", 00:07:30.670 "uuid": "0523d5ed-2aab-411c-a116-add7b1405c8a", 00:07:30.670 "strip_size_kb": 64, 00:07:30.670 "state": "configuring", 00:07:30.670 "raid_level": "raid0", 00:07:30.670 "superblock": true, 00:07:30.670 "num_base_bdevs": 3, 00:07:30.670 "num_base_bdevs_discovered": 1, 00:07:30.670 "num_base_bdevs_operational": 3, 00:07:30.670 "base_bdevs_list": [ 00:07:30.670 { 00:07:30.670 "name": "pt1", 00:07:30.670 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:30.670 "is_configured": true, 00:07:30.670 "data_offset": 2048, 00:07:30.670 "data_size": 63488 00:07:30.670 }, 00:07:30.670 { 00:07:30.670 "name": null, 00:07:30.670 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:30.670 "is_configured": false, 00:07:30.670 "data_offset": 2048, 00:07:30.670 "data_size": 63488 00:07:30.670 }, 00:07:30.670 { 00:07:30.670 "name": null, 00:07:30.670 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:30.670 "is_configured": false, 00:07:30.670 "data_offset": 2048, 00:07:30.670 "data_size": 63488 00:07:30.670 } 00:07:30.670 ] 00:07:30.670 }' 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.670 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.932 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:07:30.932 
09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:30.932 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.932 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.932 [2024-10-30 09:42:09.432574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:30.932 [2024-10-30 09:42:09.432644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.932 [2024-10-30 09:42:09.432665] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:07:30.932 [2024-10-30 09:42:09.432674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.932 [2024-10-30 09:42:09.433081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.932 [2024-10-30 09:42:09.433105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:30.932 [2024-10-30 09:42:09.433182] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:30.932 [2024-10-30 09:42:09.433201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:30.932 pt2 00:07:30.932 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.932 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:07:30.932 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.932 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.932 [2024-10-30 09:42:09.440578] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:07:30.932 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.932 09:42:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:07:30.932 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:30.932 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:30.932 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:30.932 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.932 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:30.932 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.932 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.932 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.932 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.932 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.932 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:30.932 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.932 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.932 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.932 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.932 "name": "raid_bdev1", 00:07:30.932 "uuid": "0523d5ed-2aab-411c-a116-add7b1405c8a", 00:07:30.932 "strip_size_kb": 64, 00:07:30.932 "state": "configuring", 00:07:30.932 "raid_level": "raid0", 00:07:30.932 "superblock": true, 00:07:30.932 
"num_base_bdevs": 3, 00:07:30.932 "num_base_bdevs_discovered": 1, 00:07:30.932 "num_base_bdevs_operational": 3, 00:07:30.932 "base_bdevs_list": [ 00:07:30.932 { 00:07:30.932 "name": "pt1", 00:07:30.932 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:30.932 "is_configured": true, 00:07:30.932 "data_offset": 2048, 00:07:30.932 "data_size": 63488 00:07:30.932 }, 00:07:30.932 { 00:07:30.932 "name": null, 00:07:30.932 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:30.932 "is_configured": false, 00:07:30.932 "data_offset": 0, 00:07:30.932 "data_size": 63488 00:07:30.932 }, 00:07:30.932 { 00:07:30.932 "name": null, 00:07:30.932 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:30.932 "is_configured": false, 00:07:30.932 "data_offset": 2048, 00:07:30.932 "data_size": 63488 00:07:30.932 } 00:07:30.932 ] 00:07:30.932 }' 00:07:30.932 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.932 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.194 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:31.194 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:31.194 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:31.194 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.194 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.194 [2024-10-30 09:42:09.772654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:31.194 [2024-10-30 09:42:09.772718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.194 [2024-10-30 09:42:09.772733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:07:31.194 
[2024-10-30 09:42:09.772743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.194 [2024-10-30 09:42:09.773154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.194 [2024-10-30 09:42:09.773207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:31.194 [2024-10-30 09:42:09.773277] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:31.194 [2024-10-30 09:42:09.773304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:31.194 pt2 00:07:31.194 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.194 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:31.194 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:31.194 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:31.194 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.194 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.194 [2024-10-30 09:42:09.780618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:31.194 [2024-10-30 09:42:09.780657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.194 [2024-10-30 09:42:09.780669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:07:31.194 [2024-10-30 09:42:09.780678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.194 [2024-10-30 09:42:09.781029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.194 [2024-10-30 09:42:09.781046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: pt3 00:07:31.194 [2024-10-30 09:42:09.781111] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:07:31.194 [2024-10-30 09:42:09.781129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:31.194 [2024-10-30 09:42:09.781235] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:31.194 [2024-10-30 09:42:09.781252] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:31.194 [2024-10-30 09:42:09.781488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:31.194 [2024-10-30 09:42:09.781612] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:31.194 [2024-10-30 09:42:09.781620] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:31.194 [2024-10-30 09:42:09.781738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.194 pt3 00:07:31.194 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.194 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:31.194 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:31.194 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:31.194 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.194 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:31.195 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.195 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.195 09:42:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:31.195 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.195 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.195 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.195 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.195 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.195 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.195 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.195 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.195 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.456 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.456 "name": "raid_bdev1", 00:07:31.456 "uuid": "0523d5ed-2aab-411c-a116-add7b1405c8a", 00:07:31.456 "strip_size_kb": 64, 00:07:31.456 "state": "online", 00:07:31.456 "raid_level": "raid0", 00:07:31.456 "superblock": true, 00:07:31.456 "num_base_bdevs": 3, 00:07:31.456 "num_base_bdevs_discovered": 3, 00:07:31.456 "num_base_bdevs_operational": 3, 00:07:31.456 "base_bdevs_list": [ 00:07:31.456 { 00:07:31.456 "name": "pt1", 00:07:31.456 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:31.456 "is_configured": true, 00:07:31.456 "data_offset": 2048, 00:07:31.456 "data_size": 63488 00:07:31.456 }, 00:07:31.456 { 00:07:31.456 "name": "pt2", 00:07:31.456 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:31.456 "is_configured": true, 00:07:31.456 "data_offset": 2048, 00:07:31.456 "data_size": 63488 00:07:31.456 }, 00:07:31.456 { 00:07:31.456 "name": 
"pt3", 00:07:31.456 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:31.456 "is_configured": true, 00:07:31.456 "data_offset": 2048, 00:07:31.456 "data_size": 63488 00:07:31.456 } 00:07:31.456 ] 00:07:31.456 }' 00:07:31.456 09:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.456 09:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:31.718 [2024-10-30 09:42:10.125053] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:31.718 "name": "raid_bdev1", 00:07:31.718 "aliases": [ 00:07:31.718 "0523d5ed-2aab-411c-a116-add7b1405c8a" 00:07:31.718 ], 00:07:31.718 "product_name": "Raid Volume", 00:07:31.718 
"block_size": 512, 00:07:31.718 "num_blocks": 190464, 00:07:31.718 "uuid": "0523d5ed-2aab-411c-a116-add7b1405c8a", 00:07:31.718 "assigned_rate_limits": { 00:07:31.718 "rw_ios_per_sec": 0, 00:07:31.718 "rw_mbytes_per_sec": 0, 00:07:31.718 "r_mbytes_per_sec": 0, 00:07:31.718 "w_mbytes_per_sec": 0 00:07:31.718 }, 00:07:31.718 "claimed": false, 00:07:31.718 "zoned": false, 00:07:31.718 "supported_io_types": { 00:07:31.718 "read": true, 00:07:31.718 "write": true, 00:07:31.718 "unmap": true, 00:07:31.718 "flush": true, 00:07:31.718 "reset": true, 00:07:31.718 "nvme_admin": false, 00:07:31.718 "nvme_io": false, 00:07:31.718 "nvme_io_md": false, 00:07:31.718 "write_zeroes": true, 00:07:31.718 "zcopy": false, 00:07:31.718 "get_zone_info": false, 00:07:31.718 "zone_management": false, 00:07:31.718 "zone_append": false, 00:07:31.718 "compare": false, 00:07:31.718 "compare_and_write": false, 00:07:31.718 "abort": false, 00:07:31.718 "seek_hole": false, 00:07:31.718 "seek_data": false, 00:07:31.718 "copy": false, 00:07:31.718 "nvme_iov_md": false 00:07:31.718 }, 00:07:31.718 "memory_domains": [ 00:07:31.718 { 00:07:31.718 "dma_device_id": "system", 00:07:31.718 "dma_device_type": 1 00:07:31.718 }, 00:07:31.718 { 00:07:31.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.718 "dma_device_type": 2 00:07:31.718 }, 00:07:31.718 { 00:07:31.718 "dma_device_id": "system", 00:07:31.718 "dma_device_type": 1 00:07:31.718 }, 00:07:31.718 { 00:07:31.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.718 "dma_device_type": 2 00:07:31.718 }, 00:07:31.718 { 00:07:31.718 "dma_device_id": "system", 00:07:31.718 "dma_device_type": 1 00:07:31.718 }, 00:07:31.718 { 00:07:31.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.718 "dma_device_type": 2 00:07:31.718 } 00:07:31.718 ], 00:07:31.718 "driver_specific": { 00:07:31.718 "raid": { 00:07:31.718 "uuid": "0523d5ed-2aab-411c-a116-add7b1405c8a", 00:07:31.718 "strip_size_kb": 64, 00:07:31.718 "state": "online", 00:07:31.718 
"raid_level": "raid0", 00:07:31.718 "superblock": true, 00:07:31.718 "num_base_bdevs": 3, 00:07:31.718 "num_base_bdevs_discovered": 3, 00:07:31.718 "num_base_bdevs_operational": 3, 00:07:31.718 "base_bdevs_list": [ 00:07:31.718 { 00:07:31.718 "name": "pt1", 00:07:31.718 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:31.718 "is_configured": true, 00:07:31.718 "data_offset": 2048, 00:07:31.718 "data_size": 63488 00:07:31.718 }, 00:07:31.718 { 00:07:31.718 "name": "pt2", 00:07:31.718 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:31.718 "is_configured": true, 00:07:31.718 "data_offset": 2048, 00:07:31.718 "data_size": 63488 00:07:31.718 }, 00:07:31.718 { 00:07:31.718 "name": "pt3", 00:07:31.718 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:31.718 "is_configured": true, 00:07:31.718 "data_offset": 2048, 00:07:31.718 "data_size": 63488 00:07:31.718 } 00:07:31.718 ] 00:07:31.718 } 00:07:31.718 } 00:07:31.718 }' 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:31.718 pt2 00:07:31.718 pt3' 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.718 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:31.718 [2024-10-30 09:42:10.325081] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.980 09:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.980 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0523d5ed-2aab-411c-a116-add7b1405c8a '!=' 0523d5ed-2aab-411c-a116-add7b1405c8a ']' 00:07:31.980 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:31.980 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:31.980 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:31.980 09:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63681 00:07:31.980 09:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 63681 ']' 00:07:31.980 09:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 63681 00:07:31.980 09:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:07:31.980 09:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:31.980 09:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63681 00:07:31.980 
killing process with pid 63681 00:07:31.980 09:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:31.980 09:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:31.980 09:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63681' 00:07:31.980 09:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 63681 00:07:31.980 [2024-10-30 09:42:10.381655] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:31.980 09:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 63681 00:07:31.980 [2024-10-30 09:42:10.381938] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:31.980 [2024-10-30 09:42:10.382016] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:31.980 [2024-10-30 09:42:10.382094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:31.980 [2024-10-30 09:42:10.570389] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:32.928 09:42:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:32.928 00:07:32.928 real 0m3.923s 00:07:32.928 user 0m5.657s 00:07:32.928 sys 0m0.588s 00:07:32.928 09:42:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:32.928 ************************************ 00:07:32.928 END TEST raid_superblock_test 00:07:32.928 09:42:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.928 ************************************ 00:07:32.928 09:42:11 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:07:32.928 09:42:11 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:32.928 09:42:11 bdev_raid 
-- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:32.928 09:42:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:32.928 ************************************ 00:07:32.928 START TEST raid_read_error_test 00:07:32.928 ************************************ 00:07:32.928 09:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 read 00:07:32.928 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:32.928 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:07:32.928 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:32.928 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:32.928 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.928 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:32.928 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:32.928 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.928 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:32.928 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:32.928 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.928 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:07:32.928 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:32.928 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.928 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:32.928 09:42:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:32.929 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:32.929 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:32.929 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:32.929 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:32.929 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:32.929 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:32.929 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:32.929 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:32.929 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:32.929 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dz2zKGAySI 00:07:32.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.929 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63923 00:07:32.929 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63923 00:07:32.929 09:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 63923 ']' 00:07:32.929 09:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.929 09:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:32.929 09:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:32.929 09:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:32.929 09:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.929 09:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:32.929 [2024-10-30 09:42:11.429682] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:07:32.929 [2024-10-30 09:42:11.429800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63923 ] 00:07:33.190 [2024-10-30 09:42:11.587038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.190 [2024-10-30 09:42:11.690950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.452 [2024-10-30 09:42:11.828472] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.452 [2024-10-30 09:42:11.828682] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.713 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:33.713 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:07:33.713 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:33.713 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:33.713 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.713 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.713 BaseBdev1_malloc 00:07:33.713 09:42:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.713 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:33.713 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.713 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.713 true 00:07:33.713 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.713 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:33.713 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.713 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.713 [2024-10-30 09:42:12.309853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:33.713 [2024-10-30 09:42:12.309904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.713 [2024-10-30 09:42:12.309923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:33.713 [2024-10-30 09:42:12.309934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.713 [2024-10-30 09:42:12.312054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.713 [2024-10-30 09:42:12.312102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:33.713 BaseBdev1 00:07:33.713 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.713 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:33.713 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:07:33.713 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.713 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.973 BaseBdev2_malloc 00:07:33.973 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.973 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.974 true 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.974 [2024-10-30 09:42:12.353825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:33.974 [2024-10-30 09:42:12.353873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.974 [2024-10-30 09:42:12.353888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:33.974 [2024-10-30 09:42:12.353898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.974 [2024-10-30 09:42:12.355988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.974 [2024-10-30 09:42:12.356140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:33.974 BaseBdev2 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.974 BaseBdev3_malloc 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.974 true 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.974 [2024-10-30 09:42:12.410615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:07:33.974 [2024-10-30 09:42:12.410666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.974 [2024-10-30 09:42:12.410682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:33.974 [2024-10-30 09:42:12.410693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.974 [2024-10-30 09:42:12.412804] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.974 [2024-10-30 09:42:12.412945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:07:33.974 BaseBdev3 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.974 [2024-10-30 09:42:12.418683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:33.974 [2024-10-30 09:42:12.420613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:33.974 [2024-10-30 09:42:12.420790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:33.974 [2024-10-30 09:42:12.421417] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:33.974 [2024-10-30 09:42:12.421512] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:33.974 [2024-10-30 09:42:12.421804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:33.974 [2024-10-30 09:42:12.422019] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:33.974 [2024-10-30 09:42:12.422102] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:07:33.974 [2024-10-30 09:42:12.422311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.974 09:42:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.974 "name": "raid_bdev1", 00:07:33.974 "uuid": "be2a5e85-29d0-4e93-9a1a-c9677b136c2c", 00:07:33.974 "strip_size_kb": 64, 00:07:33.974 "state": "online", 00:07:33.974 "raid_level": "raid0", 00:07:33.974 "superblock": true, 00:07:33.974 "num_base_bdevs": 3, 
00:07:33.974 "num_base_bdevs_discovered": 3, 00:07:33.974 "num_base_bdevs_operational": 3, 00:07:33.974 "base_bdevs_list": [ 00:07:33.974 { 00:07:33.974 "name": "BaseBdev1", 00:07:33.974 "uuid": "f2cd0ee4-5206-5953-8e37-2d13b5996ce8", 00:07:33.974 "is_configured": true, 00:07:33.974 "data_offset": 2048, 00:07:33.974 "data_size": 63488 00:07:33.974 }, 00:07:33.974 { 00:07:33.974 "name": "BaseBdev2", 00:07:33.974 "uuid": "1215e863-935f-56e0-bfd5-2fae5160da45", 00:07:33.974 "is_configured": true, 00:07:33.974 "data_offset": 2048, 00:07:33.974 "data_size": 63488 00:07:33.974 }, 00:07:33.974 { 00:07:33.974 "name": "BaseBdev3", 00:07:33.974 "uuid": "c016d40a-ced3-5e00-823d-737e1382e8fd", 00:07:33.974 "is_configured": true, 00:07:33.974 "data_offset": 2048, 00:07:33.974 "data_size": 63488 00:07:33.974 } 00:07:33.974 ] 00:07:33.974 }' 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.974 09:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.235 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:34.235 09:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:34.235 [2024-10-30 09:42:12.823718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:35.178 09:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:35.178 09:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.178 09:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.178 09:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.178 09:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:35.178 
09:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:35.178 09:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:07:35.178 09:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:35.178 09:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:35.178 09:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:35.178 09:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:35.178 09:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.178 09:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:35.178 09:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.178 09:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.178 09:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.178 09:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.178 09:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.178 09:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:35.178 09:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.178 09:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.178 09:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.178 09:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.178 "name": "raid_bdev1", 
00:07:35.178 "uuid": "be2a5e85-29d0-4e93-9a1a-c9677b136c2c", 00:07:35.178 "strip_size_kb": 64, 00:07:35.178 "state": "online", 00:07:35.178 "raid_level": "raid0", 00:07:35.178 "superblock": true, 00:07:35.178 "num_base_bdevs": 3, 00:07:35.178 "num_base_bdevs_discovered": 3, 00:07:35.178 "num_base_bdevs_operational": 3, 00:07:35.178 "base_bdevs_list": [ 00:07:35.178 { 00:07:35.178 "name": "BaseBdev1", 00:07:35.178 "uuid": "f2cd0ee4-5206-5953-8e37-2d13b5996ce8", 00:07:35.178 "is_configured": true, 00:07:35.178 "data_offset": 2048, 00:07:35.178 "data_size": 63488 00:07:35.178 }, 00:07:35.178 { 00:07:35.178 "name": "BaseBdev2", 00:07:35.178 "uuid": "1215e863-935f-56e0-bfd5-2fae5160da45", 00:07:35.178 "is_configured": true, 00:07:35.178 "data_offset": 2048, 00:07:35.178 "data_size": 63488 00:07:35.178 }, 00:07:35.178 { 00:07:35.178 "name": "BaseBdev3", 00:07:35.178 "uuid": "c016d40a-ced3-5e00-823d-737e1382e8fd", 00:07:35.178 "is_configured": true, 00:07:35.178 "data_offset": 2048, 00:07:35.178 "data_size": 63488 00:07:35.178 } 00:07:35.178 ] 00:07:35.178 }' 00:07:35.178 09:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.178 09:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.459 09:42:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:35.459 09:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.459 09:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.459 [2024-10-30 09:42:14.053517] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:35.459 [2024-10-30 09:42:14.053544] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:35.459 { 00:07:35.459 "results": [ 00:07:35.459 { 00:07:35.459 "job": "raid_bdev1", 00:07:35.459 "core_mask": "0x1", 00:07:35.459 "workload": "randrw", 
00:07:35.459 "percentage": 50, 00:07:35.459 "status": "finished", 00:07:35.459 "queue_depth": 1, 00:07:35.459 "io_size": 131072, 00:07:35.459 "runtime": 1.227816, 00:07:35.459 "iops": 15025.052613746686, 00:07:35.459 "mibps": 1878.1315767183357, 00:07:35.459 "io_failed": 1, 00:07:35.459 "io_timeout": 0, 00:07:35.459 "avg_latency_us": 90.96752611148405, 00:07:35.459 "min_latency_us": 33.28, 00:07:35.459 "max_latency_us": 1688.8123076923077 00:07:35.459 } 00:07:35.459 ], 00:07:35.459 "core_count": 1 00:07:35.459 } 00:07:35.459 [2024-10-30 09:42:14.056579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:35.459 [2024-10-30 09:42:14.056628] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.459 [2024-10-30 09:42:14.056665] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:35.459 [2024-10-30 09:42:14.056675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:07:35.459 09:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.459 09:42:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63923 00:07:35.459 09:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 63923 ']' 00:07:35.459 09:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 63923 00:07:35.459 09:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:07:35.459 09:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:35.459 09:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63923 00:07:35.761 09:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:35.761 killing process with pid 63923 00:07:35.761 09:42:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:35.761 09:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63923' 00:07:35.761 09:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 63923 00:07:35.761 09:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 63923 00:07:35.761 [2024-10-30 09:42:14.079493] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:35.761 [2024-10-30 09:42:14.220631] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:36.355 09:42:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dz2zKGAySI 00:07:36.355 09:42:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:36.355 09:42:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:36.355 09:42:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:07:36.355 09:42:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:36.355 ************************************ 00:07:36.355 END TEST raid_read_error_test 00:07:36.355 ************************************ 00:07:36.355 09:42:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:36.355 09:42:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:36.355 09:42:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:07:36.355 00:07:36.355 real 0m3.605s 00:07:36.355 user 0m4.272s 00:07:36.355 sys 0m0.378s 00:07:36.355 09:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:36.355 09:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.617 09:42:15 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test 
raid_io_error_test raid0 3 write 00:07:36.617 09:42:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:36.617 09:42:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:36.617 09:42:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:36.617 ************************************ 00:07:36.617 START TEST raid_write_error_test 00:07:36.617 ************************************ 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 write 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tmgw8zmNu5 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64058 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64058 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 64058 ']' 00:07:36.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:36.617 09:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.617 [2024-10-30 09:42:15.102425] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:07:36.617 [2024-10-30 09:42:15.102543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64058 ] 00:07:36.879 [2024-10-30 09:42:15.254021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.879 [2024-10-30 09:42:15.354462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.879 [2024-10-30 09:42:15.490110] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.879 [2024-10-30 09:42:15.490143] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.451 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:37.451 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:07:37.451 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
for bdev in "${base_bdevs[@]}" 00:07:37.451 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:37.451 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.451 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.451 BaseBdev1_malloc 00:07:37.451 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.451 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:37.451 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.451 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.451 true 00:07:37.451 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.451 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:37.451 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.451 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 [2024-10-30 09:42:16.070390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:37.714 [2024-10-30 09:42:16.070441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.714 [2024-10-30 09:42:16.070461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:37.714 [2024-10-30 09:42:16.070471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.714 [2024-10-30 09:42:16.072643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.714 [2024-10-30 09:42:16.072790] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:37.714 BaseBdev1 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 BaseBdev2_malloc 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 true 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 [2024-10-30 09:42:16.114311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:37.714 [2024-10-30 09:42:16.114357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.714 [2024-10-30 09:42:16.114372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:37.714 
[2024-10-30 09:42:16.114382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.714 [2024-10-30 09:42:16.116514] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.714 [2024-10-30 09:42:16.116549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:37.714 BaseBdev2 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 BaseBdev3_malloc 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 true 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 [2024-10-30 09:42:16.165649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev3_malloc 00:07:37.714 [2024-10-30 09:42:16.165790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.714 [2024-10-30 09:42:16.165812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:37.714 [2024-10-30 09:42:16.165824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.714 [2024-10-30 09:42:16.167950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.714 [2024-10-30 09:42:16.167981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:07:37.714 BaseBdev3 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 [2024-10-30 09:42:16.173719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:37.714 [2024-10-30 09:42:16.175540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:37.714 [2024-10-30 09:42:16.175617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:37.714 [2024-10-30 09:42:16.175802] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:37.714 [2024-10-30 09:42:16.175814] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:37.714 [2024-10-30 09:42:16.176077] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:37.714 [2024-10-30 09:42:16.176232] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: 
raid bdev generic 0x617000008200 00:07:37.714 [2024-10-30 09:42:16.176244] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:07:37.714 [2024-10-30 09:42:16.176376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:37.714 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.715 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.715 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.715 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:37.715 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.715 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.715 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.715 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.715 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.715 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.715 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.715 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.715 09:42:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.715 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.715 "name": "raid_bdev1", 00:07:37.715 "uuid": "e5f0b0f2-4fcf-4fd2-8e71-389b451218d9", 00:07:37.715 "strip_size_kb": 64, 00:07:37.715 "state": "online", 00:07:37.715 "raid_level": "raid0", 00:07:37.715 "superblock": true, 00:07:37.715 "num_base_bdevs": 3, 00:07:37.715 "num_base_bdevs_discovered": 3, 00:07:37.715 "num_base_bdevs_operational": 3, 00:07:37.715 "base_bdevs_list": [ 00:07:37.715 { 00:07:37.715 "name": "BaseBdev1", 00:07:37.715 "uuid": "60d21220-6f53-5c31-a079-fc33b82ce7c2", 00:07:37.715 "is_configured": true, 00:07:37.715 "data_offset": 2048, 00:07:37.715 "data_size": 63488 00:07:37.715 }, 00:07:37.715 { 00:07:37.715 "name": "BaseBdev2", 00:07:37.715 "uuid": "3bba8ab1-18b4-5e32-ab3c-c104b55f0c62", 00:07:37.715 "is_configured": true, 00:07:37.715 "data_offset": 2048, 00:07:37.715 "data_size": 63488 00:07:37.715 }, 00:07:37.715 { 00:07:37.715 "name": "BaseBdev3", 00:07:37.715 "uuid": "4740f34d-909c-5645-a21f-0bc62987278b", 00:07:37.715 "is_configured": true, 00:07:37.715 "data_offset": 2048, 00:07:37.715 "data_size": 63488 00:07:37.715 } 00:07:37.715 ] 00:07:37.715 }' 00:07:37.715 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.715 09:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.977 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:37.977 09:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:37.977 [2024-10-30 09:42:16.594750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:38.919 09:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error 
EE_BaseBdev1_malloc write failure 00:07:38.919 09:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.919 09:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.919 09:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.919 09:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:38.919 09:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:38.919 09:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:07:38.919 09:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:38.919 09:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:38.919 09:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:38.919 09:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:38.919 09:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.919 09:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:38.919 09:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.919 09:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.919 09:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.919 09:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.919 09:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.919 09:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:07:38.919 09:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.919 09:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.179 09:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.179 09:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.179 "name": "raid_bdev1", 00:07:39.179 "uuid": "e5f0b0f2-4fcf-4fd2-8e71-389b451218d9", 00:07:39.179 "strip_size_kb": 64, 00:07:39.179 "state": "online", 00:07:39.179 "raid_level": "raid0", 00:07:39.179 "superblock": true, 00:07:39.179 "num_base_bdevs": 3, 00:07:39.179 "num_base_bdevs_discovered": 3, 00:07:39.179 "num_base_bdevs_operational": 3, 00:07:39.179 "base_bdevs_list": [ 00:07:39.179 { 00:07:39.179 "name": "BaseBdev1", 00:07:39.179 "uuid": "60d21220-6f53-5c31-a079-fc33b82ce7c2", 00:07:39.179 "is_configured": true, 00:07:39.179 "data_offset": 2048, 00:07:39.179 "data_size": 63488 00:07:39.179 }, 00:07:39.179 { 00:07:39.179 "name": "BaseBdev2", 00:07:39.179 "uuid": "3bba8ab1-18b4-5e32-ab3c-c104b55f0c62", 00:07:39.179 "is_configured": true, 00:07:39.179 "data_offset": 2048, 00:07:39.179 "data_size": 63488 00:07:39.179 }, 00:07:39.179 { 00:07:39.179 "name": "BaseBdev3", 00:07:39.179 "uuid": "4740f34d-909c-5645-a21f-0bc62987278b", 00:07:39.179 "is_configured": true, 00:07:39.179 "data_offset": 2048, 00:07:39.179 "data_size": 63488 00:07:39.179 } 00:07:39.179 ] 00:07:39.179 }' 00:07:39.179 09:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.179 09:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.440 09:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:39.440 09:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.440 09:42:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.440 [2024-10-30 09:42:17.832919] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:39.440 [2024-10-30 09:42:17.832945] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:39.440 [2024-10-30 09:42:17.836085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:39.440 [2024-10-30 09:42:17.836202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.440 [2024-10-30 09:42:17.836262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:39.440 [2024-10-30 09:42:17.836639] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:07:39.440 { 00:07:39.440 "results": [ 00:07:39.440 { 00:07:39.440 "job": "raid_bdev1", 00:07:39.440 "core_mask": "0x1", 00:07:39.440 "workload": "randrw", 00:07:39.440 "percentage": 50, 00:07:39.440 "status": "finished", 00:07:39.440 "queue_depth": 1, 00:07:39.440 "io_size": 131072, 00:07:39.440 "runtime": 1.236295, 00:07:39.440 "iops": 14968.110362009067, 00:07:39.440 "mibps": 1871.0137952511334, 00:07:39.440 "io_failed": 1, 00:07:39.440 "io_timeout": 0, 00:07:39.440 "avg_latency_us": 91.24623531661248, 00:07:39.440 "min_latency_us": 19.593846153846155, 00:07:39.440 "max_latency_us": 1688.8123076923077 00:07:39.440 } 00:07:39.440 ], 00:07:39.440 "core_count": 1 00:07:39.440 } 00:07:39.440 09:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.440 09:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64058 00:07:39.440 09:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 64058 ']' 00:07:39.440 09:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 64058 00:07:39.440 09:42:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:07:39.440 09:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:39.440 09:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64058 00:07:39.440 09:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:39.440 killing process with pid 64058 00:07:39.440 09:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:39.440 09:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64058' 00:07:39.440 09:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 64058 00:07:39.440 [2024-10-30 09:42:17.870870] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:39.440 09:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 64058 00:07:39.440 [2024-10-30 09:42:18.015521] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:40.384 09:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tmgw8zmNu5 00:07:40.384 09:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:40.384 09:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:40.384 09:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:07:40.384 09:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:40.384 09:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:40.384 09:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:40.384 09:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:07:40.384 00:07:40.384 real 0m3.718s 
00:07:40.384 user 0m4.490s 00:07:40.384 sys 0m0.385s 00:07:40.384 09:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:40.384 09:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.384 ************************************ 00:07:40.384 END TEST raid_write_error_test 00:07:40.384 ************************************ 00:07:40.384 09:42:18 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:40.384 09:42:18 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:07:40.384 09:42:18 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:40.384 09:42:18 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:40.384 09:42:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:40.384 ************************************ 00:07:40.384 START TEST raid_state_function_test 00:07:40.384 ************************************ 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 false 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 
00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64190 00:07:40.384 Process raid pid: 64190 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64190' 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64190 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 64190 ']' 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:40.384 09:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.384 [2024-10-30 09:42:18.877801] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:07:40.384 [2024-10-30 09:42:18.877911] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.646 [2024-10-30 09:42:19.036975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.646 [2024-10-30 09:42:19.137793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.907 [2024-10-30 09:42:19.274008] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.907 [2024-10-30 09:42:19.274049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.171 09:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:41.171 09:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:07:41.171 09:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:41.171 09:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.171 09:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.171 [2024-10-30 09:42:19.731507] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:41.171 [2024-10-30 09:42:19.731554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:41.171 [2024-10-30 09:42:19.731564] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.171 [2024-10-30 09:42:19.731574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.171 [2024-10-30 09:42:19.731580] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:07:41.171 [2024-10-30 09:42:19.731588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:41.171 09:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.171 09:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:41.171 09:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.171 09:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.171 09:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:41.171 09:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.171 09:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:41.171 09:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.171 09:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.171 09:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.171 09:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.171 09:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.171 09:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.171 09:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.171 09:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.171 09:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.171 09:42:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.171 "name": "Existed_Raid", 00:07:41.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.171 "strip_size_kb": 64, 00:07:41.171 "state": "configuring", 00:07:41.171 "raid_level": "concat", 00:07:41.171 "superblock": false, 00:07:41.171 "num_base_bdevs": 3, 00:07:41.171 "num_base_bdevs_discovered": 0, 00:07:41.171 "num_base_bdevs_operational": 3, 00:07:41.171 "base_bdevs_list": [ 00:07:41.171 { 00:07:41.171 "name": "BaseBdev1", 00:07:41.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.171 "is_configured": false, 00:07:41.171 "data_offset": 0, 00:07:41.171 "data_size": 0 00:07:41.171 }, 00:07:41.171 { 00:07:41.171 "name": "BaseBdev2", 00:07:41.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.171 "is_configured": false, 00:07:41.171 "data_offset": 0, 00:07:41.171 "data_size": 0 00:07:41.171 }, 00:07:41.171 { 00:07:41.171 "name": "BaseBdev3", 00:07:41.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.171 "is_configured": false, 00:07:41.171 "data_offset": 0, 00:07:41.171 "data_size": 0 00:07:41.171 } 00:07:41.171 ] 00:07:41.171 }' 00:07:41.171 09:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.171 09:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.744 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:41.744 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.744 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.744 [2024-10-30 09:42:20.071537] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:41.745 [2024-10-30 09:42:20.071568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.745 [2024-10-30 09:42:20.079537] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:41.745 [2024-10-30 09:42:20.079576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:41.745 [2024-10-30 09:42:20.079584] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.745 [2024-10-30 09:42:20.079594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.745 [2024-10-30 09:42:20.079601] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:41.745 [2024-10-30 09:42:20.079610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.745 [2024-10-30 09:42:20.111850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:41.745 BaseBdev1 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.745 [ 00:07:41.745 { 00:07:41.745 "name": "BaseBdev1", 00:07:41.745 "aliases": [ 00:07:41.745 "c5dd8abd-5bcd-4a32-91ba-d2610f87b75f" 00:07:41.745 ], 00:07:41.745 "product_name": "Malloc disk", 00:07:41.745 "block_size": 512, 00:07:41.745 "num_blocks": 65536, 00:07:41.745 "uuid": "c5dd8abd-5bcd-4a32-91ba-d2610f87b75f", 00:07:41.745 "assigned_rate_limits": { 00:07:41.745 "rw_ios_per_sec": 0, 00:07:41.745 "rw_mbytes_per_sec": 0, 00:07:41.745 "r_mbytes_per_sec": 0, 00:07:41.745 "w_mbytes_per_sec": 0 00:07:41.745 }, 
00:07:41.745 "claimed": true, 00:07:41.745 "claim_type": "exclusive_write", 00:07:41.745 "zoned": false, 00:07:41.745 "supported_io_types": { 00:07:41.745 "read": true, 00:07:41.745 "write": true, 00:07:41.745 "unmap": true, 00:07:41.745 "flush": true, 00:07:41.745 "reset": true, 00:07:41.745 "nvme_admin": false, 00:07:41.745 "nvme_io": false, 00:07:41.745 "nvme_io_md": false, 00:07:41.745 "write_zeroes": true, 00:07:41.745 "zcopy": true, 00:07:41.745 "get_zone_info": false, 00:07:41.745 "zone_management": false, 00:07:41.745 "zone_append": false, 00:07:41.745 "compare": false, 00:07:41.745 "compare_and_write": false, 00:07:41.745 "abort": true, 00:07:41.745 "seek_hole": false, 00:07:41.745 "seek_data": false, 00:07:41.745 "copy": true, 00:07:41.745 "nvme_iov_md": false 00:07:41.745 }, 00:07:41.745 "memory_domains": [ 00:07:41.745 { 00:07:41.745 "dma_device_id": "system", 00:07:41.745 "dma_device_type": 1 00:07:41.745 }, 00:07:41.745 { 00:07:41.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.745 "dma_device_type": 2 00:07:41.745 } 00:07:41.745 ], 00:07:41.745 "driver_specific": {} 00:07:41.745 } 00:07:41.745 ] 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.745 09:42:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.745 "name": "Existed_Raid", 00:07:41.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.745 "strip_size_kb": 64, 00:07:41.745 "state": "configuring", 00:07:41.745 "raid_level": "concat", 00:07:41.745 "superblock": false, 00:07:41.745 "num_base_bdevs": 3, 00:07:41.745 "num_base_bdevs_discovered": 1, 00:07:41.745 "num_base_bdevs_operational": 3, 00:07:41.745 "base_bdevs_list": [ 00:07:41.745 { 00:07:41.745 "name": "BaseBdev1", 00:07:41.745 "uuid": "c5dd8abd-5bcd-4a32-91ba-d2610f87b75f", 00:07:41.745 "is_configured": true, 00:07:41.745 "data_offset": 0, 00:07:41.745 "data_size": 65536 00:07:41.745 }, 00:07:41.745 { 00:07:41.745 "name": "BaseBdev2", 00:07:41.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.745 "is_configured": false, 00:07:41.745 
"data_offset": 0, 00:07:41.745 "data_size": 0 00:07:41.745 }, 00:07:41.745 { 00:07:41.745 "name": "BaseBdev3", 00:07:41.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.745 "is_configured": false, 00:07:41.745 "data_offset": 0, 00:07:41.745 "data_size": 0 00:07:41.745 } 00:07:41.745 ] 00:07:41.745 }' 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.745 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.005 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:42.005 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.005 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.005 [2024-10-30 09:42:20.451963] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:42.005 [2024-10-30 09:42:20.452008] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:42.005 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.005 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:42.005 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.005 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.005 [2024-10-30 09:42:20.460008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:42.005 [2024-10-30 09:42:20.461856] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:42.005 [2024-10-30 09:42:20.461897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:07:42.005 [2024-10-30 09:42:20.461906] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:42.005 [2024-10-30 09:42:20.461915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:42.005 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.005 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:42.005 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:42.005 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:42.005 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.005 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.005 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:42.005 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.006 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:42.006 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.006 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.006 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.006 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.006 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.006 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:42.006 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.006 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.006 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.006 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.006 "name": "Existed_Raid", 00:07:42.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.006 "strip_size_kb": 64, 00:07:42.006 "state": "configuring", 00:07:42.006 "raid_level": "concat", 00:07:42.006 "superblock": false, 00:07:42.006 "num_base_bdevs": 3, 00:07:42.006 "num_base_bdevs_discovered": 1, 00:07:42.006 "num_base_bdevs_operational": 3, 00:07:42.006 "base_bdevs_list": [ 00:07:42.006 { 00:07:42.006 "name": "BaseBdev1", 00:07:42.006 "uuid": "c5dd8abd-5bcd-4a32-91ba-d2610f87b75f", 00:07:42.006 "is_configured": true, 00:07:42.006 "data_offset": 0, 00:07:42.006 "data_size": 65536 00:07:42.006 }, 00:07:42.006 { 00:07:42.006 "name": "BaseBdev2", 00:07:42.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.006 "is_configured": false, 00:07:42.006 "data_offset": 0, 00:07:42.006 "data_size": 0 00:07:42.006 }, 00:07:42.006 { 00:07:42.006 "name": "BaseBdev3", 00:07:42.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.006 "is_configured": false, 00:07:42.006 "data_offset": 0, 00:07:42.006 "data_size": 0 00:07:42.006 } 00:07:42.006 ] 00:07:42.006 }' 00:07:42.006 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.006 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.266 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:42.266 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:42.266 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.266 [2024-10-30 09:42:20.818560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:42.266 BaseBdev2 00:07:42.266 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.266 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:42.266 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:42.266 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:42.266 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:42.266 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:42.266 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:42.266 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:42.266 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.266 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.266 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.266 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:42.266 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.266 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.266 [ 00:07:42.266 { 00:07:42.266 "name": "BaseBdev2", 00:07:42.266 "aliases": [ 00:07:42.266 "bfcccc44-131a-4a2e-8f95-dd25ea3898dd" 00:07:42.266 ], 00:07:42.266 
"product_name": "Malloc disk", 00:07:42.266 "block_size": 512, 00:07:42.267 "num_blocks": 65536, 00:07:42.267 "uuid": "bfcccc44-131a-4a2e-8f95-dd25ea3898dd", 00:07:42.267 "assigned_rate_limits": { 00:07:42.267 "rw_ios_per_sec": 0, 00:07:42.267 "rw_mbytes_per_sec": 0, 00:07:42.267 "r_mbytes_per_sec": 0, 00:07:42.267 "w_mbytes_per_sec": 0 00:07:42.267 }, 00:07:42.267 "claimed": true, 00:07:42.267 "claim_type": "exclusive_write", 00:07:42.267 "zoned": false, 00:07:42.267 "supported_io_types": { 00:07:42.267 "read": true, 00:07:42.267 "write": true, 00:07:42.267 "unmap": true, 00:07:42.267 "flush": true, 00:07:42.267 "reset": true, 00:07:42.267 "nvme_admin": false, 00:07:42.267 "nvme_io": false, 00:07:42.267 "nvme_io_md": false, 00:07:42.267 "write_zeroes": true, 00:07:42.267 "zcopy": true, 00:07:42.267 "get_zone_info": false, 00:07:42.267 "zone_management": false, 00:07:42.267 "zone_append": false, 00:07:42.267 "compare": false, 00:07:42.267 "compare_and_write": false, 00:07:42.267 "abort": true, 00:07:42.267 "seek_hole": false, 00:07:42.267 "seek_data": false, 00:07:42.267 "copy": true, 00:07:42.267 "nvme_iov_md": false 00:07:42.267 }, 00:07:42.267 "memory_domains": [ 00:07:42.267 { 00:07:42.267 "dma_device_id": "system", 00:07:42.267 "dma_device_type": 1 00:07:42.267 }, 00:07:42.267 { 00:07:42.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.267 "dma_device_type": 2 00:07:42.267 } 00:07:42.267 ], 00:07:42.267 "driver_specific": {} 00:07:42.267 } 00:07:42.267 ] 00:07:42.267 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.267 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:42.267 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:42.267 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:42.267 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:42.267 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.267 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.267 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:42.267 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.267 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:42.267 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.267 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.267 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.267 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.267 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.267 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.267 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.267 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.267 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.267 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.267 "name": "Existed_Raid", 00:07:42.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.267 "strip_size_kb": 64, 00:07:42.267 "state": "configuring", 00:07:42.267 "raid_level": "concat", 00:07:42.267 "superblock": false, 
00:07:42.267 "num_base_bdevs": 3, 00:07:42.267 "num_base_bdevs_discovered": 2, 00:07:42.267 "num_base_bdevs_operational": 3, 00:07:42.267 "base_bdevs_list": [ 00:07:42.267 { 00:07:42.267 "name": "BaseBdev1", 00:07:42.267 "uuid": "c5dd8abd-5bcd-4a32-91ba-d2610f87b75f", 00:07:42.267 "is_configured": true, 00:07:42.267 "data_offset": 0, 00:07:42.267 "data_size": 65536 00:07:42.267 }, 00:07:42.267 { 00:07:42.267 "name": "BaseBdev2", 00:07:42.267 "uuid": "bfcccc44-131a-4a2e-8f95-dd25ea3898dd", 00:07:42.267 "is_configured": true, 00:07:42.267 "data_offset": 0, 00:07:42.267 "data_size": 65536 00:07:42.267 }, 00:07:42.267 { 00:07:42.267 "name": "BaseBdev3", 00:07:42.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.267 "is_configured": false, 00:07:42.267 "data_offset": 0, 00:07:42.267 "data_size": 0 00:07:42.267 } 00:07:42.267 ] 00:07:42.267 }' 00:07:42.267 09:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.267 09:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.560 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:42.560 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.560 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.823 [2024-10-30 09:42:21.194217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:42.823 [2024-10-30 09:42:21.194258] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:42.823 [2024-10-30 09:42:21.194270] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:42.823 [2024-10-30 09:42:21.194531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:42.823 [2024-10-30 09:42:21.194671] bdev_raid.c:1764:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000007e80 00:07:42.823 [2024-10-30 09:42:21.194680] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:42.823 [2024-10-30 09:42:21.194910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.823 BaseBdev3 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.823 [ 00:07:42.823 { 00:07:42.823 "name": "BaseBdev3", 00:07:42.823 "aliases": [ 
00:07:42.823 "778ee316-1343-427e-99e6-d5e34bc1a641" 00:07:42.823 ], 00:07:42.823 "product_name": "Malloc disk", 00:07:42.823 "block_size": 512, 00:07:42.823 "num_blocks": 65536, 00:07:42.823 "uuid": "778ee316-1343-427e-99e6-d5e34bc1a641", 00:07:42.823 "assigned_rate_limits": { 00:07:42.823 "rw_ios_per_sec": 0, 00:07:42.823 "rw_mbytes_per_sec": 0, 00:07:42.823 "r_mbytes_per_sec": 0, 00:07:42.823 "w_mbytes_per_sec": 0 00:07:42.823 }, 00:07:42.823 "claimed": true, 00:07:42.823 "claim_type": "exclusive_write", 00:07:42.823 "zoned": false, 00:07:42.823 "supported_io_types": { 00:07:42.823 "read": true, 00:07:42.823 "write": true, 00:07:42.823 "unmap": true, 00:07:42.823 "flush": true, 00:07:42.823 "reset": true, 00:07:42.823 "nvme_admin": false, 00:07:42.823 "nvme_io": false, 00:07:42.823 "nvme_io_md": false, 00:07:42.823 "write_zeroes": true, 00:07:42.823 "zcopy": true, 00:07:42.823 "get_zone_info": false, 00:07:42.823 "zone_management": false, 00:07:42.823 "zone_append": false, 00:07:42.823 "compare": false, 00:07:42.823 "compare_and_write": false, 00:07:42.823 "abort": true, 00:07:42.823 "seek_hole": false, 00:07:42.823 "seek_data": false, 00:07:42.823 "copy": true, 00:07:42.823 "nvme_iov_md": false 00:07:42.823 }, 00:07:42.823 "memory_domains": [ 00:07:42.823 { 00:07:42.823 "dma_device_id": "system", 00:07:42.823 "dma_device_type": 1 00:07:42.823 }, 00:07:42.823 { 00:07:42.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.823 "dma_device_type": 2 00:07:42.823 } 00:07:42.823 ], 00:07:42.823 "driver_specific": {} 00:07:42.823 } 00:07:42.823 ] 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.823 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.823 "name": "Existed_Raid", 00:07:42.823 "uuid": "842d4354-c733-449c-afd0-f1082725aec2", 00:07:42.823 "strip_size_kb": 64, 00:07:42.823 "state": "online", 
00:07:42.823 "raid_level": "concat", 00:07:42.823 "superblock": false, 00:07:42.823 "num_base_bdevs": 3, 00:07:42.823 "num_base_bdevs_discovered": 3, 00:07:42.823 "num_base_bdevs_operational": 3, 00:07:42.823 "base_bdevs_list": [ 00:07:42.823 { 00:07:42.823 "name": "BaseBdev1", 00:07:42.823 "uuid": "c5dd8abd-5bcd-4a32-91ba-d2610f87b75f", 00:07:42.823 "is_configured": true, 00:07:42.823 "data_offset": 0, 00:07:42.823 "data_size": 65536 00:07:42.823 }, 00:07:42.824 { 00:07:42.824 "name": "BaseBdev2", 00:07:42.824 "uuid": "bfcccc44-131a-4a2e-8f95-dd25ea3898dd", 00:07:42.824 "is_configured": true, 00:07:42.824 "data_offset": 0, 00:07:42.824 "data_size": 65536 00:07:42.824 }, 00:07:42.824 { 00:07:42.824 "name": "BaseBdev3", 00:07:42.824 "uuid": "778ee316-1343-427e-99e6-d5e34bc1a641", 00:07:42.824 "is_configured": true, 00:07:42.824 "data_offset": 0, 00:07:42.824 "data_size": 65536 00:07:42.824 } 00:07:42.824 ] 00:07:42.824 }' 00:07:42.824 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.824 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:43.085 09:42:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.085 [2024-10-30 09:42:21.542670] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:43.085 "name": "Existed_Raid", 00:07:43.085 "aliases": [ 00:07:43.085 "842d4354-c733-449c-afd0-f1082725aec2" 00:07:43.085 ], 00:07:43.085 "product_name": "Raid Volume", 00:07:43.085 "block_size": 512, 00:07:43.085 "num_blocks": 196608, 00:07:43.085 "uuid": "842d4354-c733-449c-afd0-f1082725aec2", 00:07:43.085 "assigned_rate_limits": { 00:07:43.085 "rw_ios_per_sec": 0, 00:07:43.085 "rw_mbytes_per_sec": 0, 00:07:43.085 "r_mbytes_per_sec": 0, 00:07:43.085 "w_mbytes_per_sec": 0 00:07:43.085 }, 00:07:43.085 "claimed": false, 00:07:43.085 "zoned": false, 00:07:43.085 "supported_io_types": { 00:07:43.085 "read": true, 00:07:43.085 "write": true, 00:07:43.085 "unmap": true, 00:07:43.085 "flush": true, 00:07:43.085 "reset": true, 00:07:43.085 "nvme_admin": false, 00:07:43.085 "nvme_io": false, 00:07:43.085 "nvme_io_md": false, 00:07:43.085 "write_zeroes": true, 00:07:43.085 "zcopy": false, 00:07:43.085 "get_zone_info": false, 00:07:43.085 "zone_management": false, 00:07:43.085 "zone_append": false, 00:07:43.085 "compare": false, 00:07:43.085 "compare_and_write": false, 00:07:43.085 "abort": false, 00:07:43.085 "seek_hole": false, 00:07:43.085 "seek_data": false, 00:07:43.085 "copy": false, 00:07:43.085 "nvme_iov_md": false 00:07:43.085 }, 00:07:43.085 "memory_domains": [ 00:07:43.085 { 00:07:43.085 "dma_device_id": "system", 00:07:43.085 "dma_device_type": 1 
00:07:43.085 }, 00:07:43.085 { 00:07:43.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.085 "dma_device_type": 2 00:07:43.085 }, 00:07:43.085 { 00:07:43.085 "dma_device_id": "system", 00:07:43.085 "dma_device_type": 1 00:07:43.085 }, 00:07:43.085 { 00:07:43.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.085 "dma_device_type": 2 00:07:43.085 }, 00:07:43.085 { 00:07:43.085 "dma_device_id": "system", 00:07:43.085 "dma_device_type": 1 00:07:43.085 }, 00:07:43.085 { 00:07:43.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.085 "dma_device_type": 2 00:07:43.085 } 00:07:43.085 ], 00:07:43.085 "driver_specific": { 00:07:43.085 "raid": { 00:07:43.085 "uuid": "842d4354-c733-449c-afd0-f1082725aec2", 00:07:43.085 "strip_size_kb": 64, 00:07:43.085 "state": "online", 00:07:43.085 "raid_level": "concat", 00:07:43.085 "superblock": false, 00:07:43.085 "num_base_bdevs": 3, 00:07:43.085 "num_base_bdevs_discovered": 3, 00:07:43.085 "num_base_bdevs_operational": 3, 00:07:43.085 "base_bdevs_list": [ 00:07:43.085 { 00:07:43.085 "name": "BaseBdev1", 00:07:43.085 "uuid": "c5dd8abd-5bcd-4a32-91ba-d2610f87b75f", 00:07:43.085 "is_configured": true, 00:07:43.085 "data_offset": 0, 00:07:43.085 "data_size": 65536 00:07:43.085 }, 00:07:43.085 { 00:07:43.085 "name": "BaseBdev2", 00:07:43.085 "uuid": "bfcccc44-131a-4a2e-8f95-dd25ea3898dd", 00:07:43.085 "is_configured": true, 00:07:43.085 "data_offset": 0, 00:07:43.085 "data_size": 65536 00:07:43.085 }, 00:07:43.085 { 00:07:43.085 "name": "BaseBdev3", 00:07:43.085 "uuid": "778ee316-1343-427e-99e6-d5e34bc1a641", 00:07:43.085 "is_configured": true, 00:07:43.085 "data_offset": 0, 00:07:43.085 "data_size": 65536 00:07:43.085 } 00:07:43.085 ] 00:07:43.085 } 00:07:43.085 } 00:07:43.085 }' 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:43.085 BaseBdev2 00:07:43.085 BaseBdev3' 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.085 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.086 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:43.086 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.086 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.086 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.347 [2024-10-30 09:42:21.730419] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:43.347 [2024-10-30 09:42:21.730442] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:43.347 [2024-10-30 09:42:21.730489] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.347 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.347 "name": "Existed_Raid", 00:07:43.347 "uuid": "842d4354-c733-449c-afd0-f1082725aec2", 00:07:43.347 "strip_size_kb": 64, 00:07:43.347 "state": "offline", 00:07:43.347 "raid_level": "concat", 00:07:43.347 "superblock": false, 00:07:43.347 "num_base_bdevs": 3, 00:07:43.347 "num_base_bdevs_discovered": 2, 00:07:43.347 "num_base_bdevs_operational": 2, 00:07:43.347 "base_bdevs_list": [ 00:07:43.347 { 00:07:43.347 "name": null, 00:07:43.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.347 "is_configured": false, 00:07:43.347 "data_offset": 0, 00:07:43.348 "data_size": 65536 00:07:43.348 }, 00:07:43.348 { 00:07:43.348 "name": "BaseBdev2", 00:07:43.348 "uuid": "bfcccc44-131a-4a2e-8f95-dd25ea3898dd", 00:07:43.348 "is_configured": true, 00:07:43.348 "data_offset": 0, 00:07:43.348 "data_size": 65536 00:07:43.348 }, 00:07:43.348 { 00:07:43.348 "name": "BaseBdev3", 00:07:43.348 "uuid": "778ee316-1343-427e-99e6-d5e34bc1a641", 00:07:43.348 "is_configured": true, 00:07:43.348 "data_offset": 0, 00:07:43.348 "data_size": 65536 00:07:43.348 } 00:07:43.348 ] 00:07:43.348 }' 00:07:43.348 09:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.348 09:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.609 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:43.609 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:43.609 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:43.609 09:42:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.609 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.609 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.609 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.609 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:43.609 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:43.609 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:43.609 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.609 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.609 [2024-10-30 09:42:22.128255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:43.609 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.609 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:43.609 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:43.609 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.609 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.609 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.609 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:43.609 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.609 09:42:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:43.609 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:43.609 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:43.609 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.609 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.609 [2024-10-30 09:42:22.222250] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:43.609 [2024-10-30 09:42:22.222290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:43.870 
09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.870 BaseBdev2 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.870 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.870 [ 00:07:43.870 { 00:07:43.870 "name": "BaseBdev2", 00:07:43.870 "aliases": [ 00:07:43.870 "037093fb-af3b-4f0f-b8e3-64fd47a2326e" 00:07:43.870 ], 00:07:43.870 "product_name": "Malloc disk", 00:07:43.870 "block_size": 512, 00:07:43.870 "num_blocks": 65536, 00:07:43.870 "uuid": "037093fb-af3b-4f0f-b8e3-64fd47a2326e", 00:07:43.870 "assigned_rate_limits": { 00:07:43.870 "rw_ios_per_sec": 0, 00:07:43.870 "rw_mbytes_per_sec": 0, 00:07:43.870 "r_mbytes_per_sec": 0, 00:07:43.870 "w_mbytes_per_sec": 0 00:07:43.870 }, 00:07:43.870 "claimed": false, 00:07:43.870 "zoned": false, 00:07:43.870 "supported_io_types": { 00:07:43.870 "read": true, 00:07:43.870 "write": true, 00:07:43.870 "unmap": true, 00:07:43.870 "flush": true, 00:07:43.870 "reset": true, 00:07:43.870 "nvme_admin": false, 00:07:43.870 "nvme_io": false, 00:07:43.870 "nvme_io_md": false, 00:07:43.870 "write_zeroes": true, 00:07:43.870 "zcopy": true, 00:07:43.870 "get_zone_info": false, 00:07:43.871 "zone_management": false, 00:07:43.871 "zone_append": false, 00:07:43.871 "compare": false, 00:07:43.871 "compare_and_write": false, 00:07:43.871 "abort": true, 00:07:43.871 "seek_hole": false, 00:07:43.871 "seek_data": false, 00:07:43.871 "copy": true, 00:07:43.871 "nvme_iov_md": false 00:07:43.871 }, 00:07:43.871 "memory_domains": [ 00:07:43.871 { 00:07:43.871 "dma_device_id": "system", 00:07:43.871 "dma_device_type": 1 00:07:43.871 }, 00:07:43.871 { 00:07:43.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.871 "dma_device_type": 2 00:07:43.871 } 00:07:43.871 ], 00:07:43.871 "driver_specific": {} 00:07:43.871 } 00:07:43.871 ] 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:43.871 
09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.871 BaseBdev3 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.871 [ 00:07:43.871 { 00:07:43.871 "name": "BaseBdev3", 00:07:43.871 "aliases": [ 00:07:43.871 "c04c45ad-1272-4b16-825e-fa134c65bd2f" 00:07:43.871 ], 00:07:43.871 "product_name": "Malloc disk", 00:07:43.871 "block_size": 512, 00:07:43.871 "num_blocks": 65536, 00:07:43.871 "uuid": "c04c45ad-1272-4b16-825e-fa134c65bd2f", 00:07:43.871 "assigned_rate_limits": { 00:07:43.871 "rw_ios_per_sec": 0, 00:07:43.871 "rw_mbytes_per_sec": 0, 00:07:43.871 "r_mbytes_per_sec": 0, 00:07:43.871 "w_mbytes_per_sec": 0 00:07:43.871 }, 00:07:43.871 "claimed": false, 00:07:43.871 "zoned": false, 00:07:43.871 "supported_io_types": { 00:07:43.871 "read": true, 00:07:43.871 "write": true, 00:07:43.871 "unmap": true, 00:07:43.871 "flush": true, 00:07:43.871 "reset": true, 00:07:43.871 "nvme_admin": false, 00:07:43.871 "nvme_io": false, 00:07:43.871 "nvme_io_md": false, 00:07:43.871 "write_zeroes": true, 00:07:43.871 "zcopy": true, 00:07:43.871 "get_zone_info": false, 00:07:43.871 "zone_management": false, 00:07:43.871 "zone_append": false, 00:07:43.871 "compare": false, 00:07:43.871 "compare_and_write": false, 00:07:43.871 "abort": true, 00:07:43.871 "seek_hole": false, 00:07:43.871 "seek_data": false, 00:07:43.871 "copy": true, 00:07:43.871 "nvme_iov_md": false 00:07:43.871 }, 00:07:43.871 "memory_domains": [ 00:07:43.871 { 00:07:43.871 "dma_device_id": "system", 00:07:43.871 "dma_device_type": 1 00:07:43.871 }, 00:07:43.871 { 00:07:43.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.871 "dma_device_type": 2 00:07:43.871 } 00:07:43.871 ], 00:07:43.871 "driver_specific": {} 00:07:43.871 } 00:07:43.871 ] 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:43.871 
09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.871 [2024-10-30 09:42:22.433134] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:43.871 [2024-10-30 09:42:22.433268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:43.871 [2024-10-30 09:42:22.433338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:43.871 [2024-10-30 09:42:22.435185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.871 "name": "Existed_Raid", 00:07:43.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.871 "strip_size_kb": 64, 00:07:43.871 "state": "configuring", 00:07:43.871 "raid_level": "concat", 00:07:43.871 "superblock": false, 00:07:43.871 "num_base_bdevs": 3, 00:07:43.871 "num_base_bdevs_discovered": 2, 00:07:43.871 "num_base_bdevs_operational": 3, 00:07:43.871 "base_bdevs_list": [ 00:07:43.871 { 00:07:43.871 "name": "BaseBdev1", 00:07:43.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.871 "is_configured": false, 00:07:43.871 "data_offset": 0, 00:07:43.871 "data_size": 0 00:07:43.871 }, 00:07:43.871 { 00:07:43.871 "name": "BaseBdev2", 00:07:43.871 "uuid": "037093fb-af3b-4f0f-b8e3-64fd47a2326e", 00:07:43.871 "is_configured": true, 00:07:43.871 "data_offset": 0, 00:07:43.871 "data_size": 65536 00:07:43.871 }, 00:07:43.871 { 00:07:43.871 "name": "BaseBdev3", 00:07:43.871 "uuid": 
"c04c45ad-1272-4b16-825e-fa134c65bd2f", 00:07:43.871 "is_configured": true, 00:07:43.871 "data_offset": 0, 00:07:43.871 "data_size": 65536 00:07:43.871 } 00:07:43.871 ] 00:07:43.871 }' 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.871 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.132 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:44.132 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.132 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.392 [2024-10-30 09:42:22.749199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:44.392 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.392 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:44.392 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.392 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.392 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:44.392 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.392 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:44.392 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.392 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.392 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:44.392 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.392 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.392 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.392 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.392 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.392 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.393 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.393 "name": "Existed_Raid", 00:07:44.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.393 "strip_size_kb": 64, 00:07:44.393 "state": "configuring", 00:07:44.393 "raid_level": "concat", 00:07:44.393 "superblock": false, 00:07:44.393 "num_base_bdevs": 3, 00:07:44.393 "num_base_bdevs_discovered": 1, 00:07:44.393 "num_base_bdevs_operational": 3, 00:07:44.393 "base_bdevs_list": [ 00:07:44.393 { 00:07:44.393 "name": "BaseBdev1", 00:07:44.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.393 "is_configured": false, 00:07:44.393 "data_offset": 0, 00:07:44.393 "data_size": 0 00:07:44.393 }, 00:07:44.393 { 00:07:44.393 "name": null, 00:07:44.393 "uuid": "037093fb-af3b-4f0f-b8e3-64fd47a2326e", 00:07:44.393 "is_configured": false, 00:07:44.393 "data_offset": 0, 00:07:44.393 "data_size": 65536 00:07:44.393 }, 00:07:44.393 { 00:07:44.393 "name": "BaseBdev3", 00:07:44.393 "uuid": "c04c45ad-1272-4b16-825e-fa134c65bd2f", 00:07:44.393 "is_configured": true, 00:07:44.393 "data_offset": 0, 00:07:44.393 "data_size": 65536 00:07:44.393 } 00:07:44.393 ] 00:07:44.393 }' 00:07:44.393 09:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:44.393 09:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.653 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.653 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.653 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.653 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:44.653 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.653 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:44.653 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:44.653 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.653 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.653 [2024-10-30 09:42:23.155390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:44.653 BaseBdev1 00:07:44.653 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.653 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:44.653 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:44.653 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:44.653 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:44.653 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:44.653 09:42:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:44.653 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:44.653 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.653 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.653 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.653 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:44.653 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.653 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.653 [ 00:07:44.653 { 00:07:44.653 "name": "BaseBdev1", 00:07:44.653 "aliases": [ 00:07:44.653 "baccc2ed-e9a7-4819-8acd-e8d601f817c5" 00:07:44.653 ], 00:07:44.653 "product_name": "Malloc disk", 00:07:44.653 "block_size": 512, 00:07:44.653 "num_blocks": 65536, 00:07:44.653 "uuid": "baccc2ed-e9a7-4819-8acd-e8d601f817c5", 00:07:44.653 "assigned_rate_limits": { 00:07:44.653 "rw_ios_per_sec": 0, 00:07:44.653 "rw_mbytes_per_sec": 0, 00:07:44.653 "r_mbytes_per_sec": 0, 00:07:44.653 "w_mbytes_per_sec": 0 00:07:44.653 }, 00:07:44.653 "claimed": true, 00:07:44.654 "claim_type": "exclusive_write", 00:07:44.654 "zoned": false, 00:07:44.654 "supported_io_types": { 00:07:44.654 "read": true, 00:07:44.654 "write": true, 00:07:44.654 "unmap": true, 00:07:44.654 "flush": true, 00:07:44.654 "reset": true, 00:07:44.654 "nvme_admin": false, 00:07:44.654 "nvme_io": false, 00:07:44.654 "nvme_io_md": false, 00:07:44.654 "write_zeroes": true, 00:07:44.654 "zcopy": true, 00:07:44.654 "get_zone_info": false, 00:07:44.654 "zone_management": false, 00:07:44.654 "zone_append": false, 00:07:44.654 "compare": false, 00:07:44.654 "compare_and_write": false, 
00:07:44.654 "abort": true, 00:07:44.654 "seek_hole": false, 00:07:44.654 "seek_data": false, 00:07:44.654 "copy": true, 00:07:44.654 "nvme_iov_md": false 00:07:44.654 }, 00:07:44.654 "memory_domains": [ 00:07:44.654 { 00:07:44.654 "dma_device_id": "system", 00:07:44.654 "dma_device_type": 1 00:07:44.654 }, 00:07:44.654 { 00:07:44.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.654 "dma_device_type": 2 00:07:44.654 } 00:07:44.654 ], 00:07:44.654 "driver_specific": {} 00:07:44.654 } 00:07:44.654 ] 00:07:44.654 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.654 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:44.654 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:44.654 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.654 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.654 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:44.654 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.654 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:44.654 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.654 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.654 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.654 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.654 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
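The xtrace above shows `verify_raid_bdev_state Existed_Raid configuring concat 64 3` fetching `rpc_cmd bdev_raid_get_bdevs all` and filtering it with `jq -r '.[] | select(.name == "Existed_Raid")'`. A minimal Python sketch of that same check (the helper name and abridged payload are ours, with field values copied from the JSON captured in this log, not an SPDK API):

```python
import json

# Abridged bdev_raid_get_bdevs output, values copied from the xtrace log.
raid_bdevs = json.loads("""
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "concat",
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 3
  }
]
""")

def verify_raid_bdev_state(bdevs, name, state, level, strip_size, operational):
    # Mirror of jq: .[] | select(.name == "Existed_Raid")
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == state
    assert info["raid_level"] == level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    return info

info = verify_raid_bdev_state(raid_bdevs, "Existed_Raid", "configuring", "concat", 64, 3)
print(info["num_base_bdevs_discovered"])  # 2 base bdevs discovered at this point in the test
```

The test script keeps repeating this verification after each `bdev_raid_remove_base_bdev`/`bdev_raid_add_base_bdev` call, watching `num_base_bdevs_discovered` move between 1, 2, and 3 while `state` stays `configuring` until all three base bdevs are present.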
00:07:44.654 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.654 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.654 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.654 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.654 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.654 "name": "Existed_Raid", 00:07:44.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.654 "strip_size_kb": 64, 00:07:44.654 "state": "configuring", 00:07:44.654 "raid_level": "concat", 00:07:44.654 "superblock": false, 00:07:44.654 "num_base_bdevs": 3, 00:07:44.654 "num_base_bdevs_discovered": 2, 00:07:44.654 "num_base_bdevs_operational": 3, 00:07:44.654 "base_bdevs_list": [ 00:07:44.654 { 00:07:44.654 "name": "BaseBdev1", 00:07:44.654 "uuid": "baccc2ed-e9a7-4819-8acd-e8d601f817c5", 00:07:44.654 "is_configured": true, 00:07:44.654 "data_offset": 0, 00:07:44.654 "data_size": 65536 00:07:44.654 }, 00:07:44.654 { 00:07:44.654 "name": null, 00:07:44.654 "uuid": "037093fb-af3b-4f0f-b8e3-64fd47a2326e", 00:07:44.654 "is_configured": false, 00:07:44.654 "data_offset": 0, 00:07:44.654 "data_size": 65536 00:07:44.654 }, 00:07:44.654 { 00:07:44.654 "name": "BaseBdev3", 00:07:44.654 "uuid": "c04c45ad-1272-4b16-825e-fa134c65bd2f", 00:07:44.654 "is_configured": true, 00:07:44.654 "data_offset": 0, 00:07:44.654 "data_size": 65536 00:07:44.654 } 00:07:44.654 ] 00:07:44.654 }' 00:07:44.654 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.654 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.915 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.915 09:42:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:44.915 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.915 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.915 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.915 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:44.915 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:44.915 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.915 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.174 [2024-10-30 09:42:23.539533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:45.174 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.174 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:45.174 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.174 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.174 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:45.174 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.174 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:45.174 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.174 09:42:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.174 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.174 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.174 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.174 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.174 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.174 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.174 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.174 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.174 "name": "Existed_Raid", 00:07:45.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.174 "strip_size_kb": 64, 00:07:45.174 "state": "configuring", 00:07:45.174 "raid_level": "concat", 00:07:45.174 "superblock": false, 00:07:45.174 "num_base_bdevs": 3, 00:07:45.174 "num_base_bdevs_discovered": 1, 00:07:45.174 "num_base_bdevs_operational": 3, 00:07:45.174 "base_bdevs_list": [ 00:07:45.174 { 00:07:45.174 "name": "BaseBdev1", 00:07:45.174 "uuid": "baccc2ed-e9a7-4819-8acd-e8d601f817c5", 00:07:45.174 "is_configured": true, 00:07:45.174 "data_offset": 0, 00:07:45.174 "data_size": 65536 00:07:45.174 }, 00:07:45.174 { 00:07:45.174 "name": null, 00:07:45.174 "uuid": "037093fb-af3b-4f0f-b8e3-64fd47a2326e", 00:07:45.174 "is_configured": false, 00:07:45.174 "data_offset": 0, 00:07:45.174 "data_size": 65536 00:07:45.174 }, 00:07:45.174 { 00:07:45.174 "name": null, 00:07:45.174 "uuid": "c04c45ad-1272-4b16-825e-fa134c65bd2f", 00:07:45.174 "is_configured": false, 00:07:45.174 "data_offset": 0, 00:07:45.174 "data_size": 65536 00:07:45.174 
} 00:07:45.174 ] 00:07:45.174 }' 00:07:45.174 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.174 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.436 [2024-10-30 09:42:23.887657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.436 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.436 "name": "Existed_Raid", 00:07:45.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.436 "strip_size_kb": 64, 00:07:45.436 "state": "configuring", 00:07:45.436 "raid_level": "concat", 00:07:45.436 "superblock": false, 00:07:45.436 "num_base_bdevs": 3, 00:07:45.436 "num_base_bdevs_discovered": 2, 00:07:45.436 "num_base_bdevs_operational": 3, 00:07:45.436 "base_bdevs_list": [ 00:07:45.436 { 00:07:45.436 "name": "BaseBdev1", 00:07:45.436 "uuid": "baccc2ed-e9a7-4819-8acd-e8d601f817c5", 00:07:45.437 "is_configured": true, 00:07:45.437 "data_offset": 0, 00:07:45.437 "data_size": 65536 00:07:45.437 }, 00:07:45.437 { 
00:07:45.437 "name": null, 00:07:45.437 "uuid": "037093fb-af3b-4f0f-b8e3-64fd47a2326e", 00:07:45.437 "is_configured": false, 00:07:45.437 "data_offset": 0, 00:07:45.437 "data_size": 65536 00:07:45.437 }, 00:07:45.437 { 00:07:45.437 "name": "BaseBdev3", 00:07:45.437 "uuid": "c04c45ad-1272-4b16-825e-fa134c65bd2f", 00:07:45.437 "is_configured": true, 00:07:45.437 "data_offset": 0, 00:07:45.437 "data_size": 65536 00:07:45.437 } 00:07:45.437 ] 00:07:45.437 }' 00:07:45.437 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.437 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.697 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.697 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:45.697 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.697 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.697 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.697 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:45.697 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:45.697 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.697 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.697 [2024-10-30 09:42:24.255745] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:45.697 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.697 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state 
Existed_Raid configuring concat 64 3 00:07:45.697 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.697 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.697 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:45.697 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.697 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:45.697 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.697 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.697 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.697 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.959 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.959 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.959 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.959 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.959 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.959 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.959 "name": "Existed_Raid", 00:07:45.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.959 "strip_size_kb": 64, 00:07:45.959 "state": "configuring", 00:07:45.959 "raid_level": "concat", 00:07:45.959 "superblock": false, 00:07:45.959 "num_base_bdevs": 3, 
00:07:45.959 "num_base_bdevs_discovered": 1, 00:07:45.959 "num_base_bdevs_operational": 3, 00:07:45.959 "base_bdevs_list": [ 00:07:45.959 { 00:07:45.959 "name": null, 00:07:45.959 "uuid": "baccc2ed-e9a7-4819-8acd-e8d601f817c5", 00:07:45.959 "is_configured": false, 00:07:45.959 "data_offset": 0, 00:07:45.959 "data_size": 65536 00:07:45.959 }, 00:07:45.959 { 00:07:45.959 "name": null, 00:07:45.959 "uuid": "037093fb-af3b-4f0f-b8e3-64fd47a2326e", 00:07:45.959 "is_configured": false, 00:07:45.959 "data_offset": 0, 00:07:45.959 "data_size": 65536 00:07:45.959 }, 00:07:45.959 { 00:07:45.959 "name": "BaseBdev3", 00:07:45.959 "uuid": "c04c45ad-1272-4b16-825e-fa134c65bd2f", 00:07:45.959 "is_configured": true, 00:07:45.959 "data_offset": 0, 00:07:45.959 "data_size": 65536 00:07:45.959 } 00:07:45.959 ] 00:07:45.959 }' 00:07:45.959 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.959 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.221 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.221 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:46.221 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.221 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.221 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.221 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:46.221 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:46.221 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.221 09:42:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.221 [2024-10-30 09:42:24.673519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:46.221 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.221 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:46.221 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.221 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.221 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:46.221 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.221 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:46.221 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.221 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.221 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.221 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.221 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.221 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.221 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.221 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.221 09:42:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.221 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.221 "name": "Existed_Raid", 00:07:46.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.222 "strip_size_kb": 64, 00:07:46.222 "state": "configuring", 00:07:46.222 "raid_level": "concat", 00:07:46.222 "superblock": false, 00:07:46.222 "num_base_bdevs": 3, 00:07:46.222 "num_base_bdevs_discovered": 2, 00:07:46.222 "num_base_bdevs_operational": 3, 00:07:46.222 "base_bdevs_list": [ 00:07:46.222 { 00:07:46.222 "name": null, 00:07:46.222 "uuid": "baccc2ed-e9a7-4819-8acd-e8d601f817c5", 00:07:46.222 "is_configured": false, 00:07:46.222 "data_offset": 0, 00:07:46.222 "data_size": 65536 00:07:46.222 }, 00:07:46.222 { 00:07:46.222 "name": "BaseBdev2", 00:07:46.222 "uuid": "037093fb-af3b-4f0f-b8e3-64fd47a2326e", 00:07:46.222 "is_configured": true, 00:07:46.222 "data_offset": 0, 00:07:46.222 "data_size": 65536 00:07:46.222 }, 00:07:46.222 { 00:07:46.222 "name": "BaseBdev3", 00:07:46.222 "uuid": "c04c45ad-1272-4b16-825e-fa134c65bd2f", 00:07:46.222 "is_configured": true, 00:07:46.222 "data_offset": 0, 00:07:46.222 "data_size": 65536 00:07:46.222 } 00:07:46.222 ] 00:07:46.222 }' 00:07:46.222 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.222 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.484 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.484 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.484 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.484 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:46.484 09:42:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.484 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:46.484 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.484 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:46.484 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.484 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.484 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.484 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u baccc2ed-e9a7-4819-8acd-e8d601f817c5 00:07:46.484 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.484 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.484 [2024-10-30 09:42:25.087986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:46.484 [2024-10-30 09:42:25.088031] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:46.484 [2024-10-30 09:42:25.088040] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:46.484 [2024-10-30 09:42:25.088304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:46.485 [2024-10-30 09:42:25.088434] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:46.485 [2024-10-30 09:42:25.088443] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:07:46.485 [2024-10-30 09:42:25.088684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:07:46.485 NewBaseBdev 00:07:46.485 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.485 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:46.485 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:07:46.485 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:46.485 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:46.485 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:46.485 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:46.485 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:46.485 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.485 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.485 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.485 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:46.485 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.485 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.746 [ 00:07:46.746 { 00:07:46.746 "name": "NewBaseBdev", 00:07:46.746 "aliases": [ 00:07:46.746 "baccc2ed-e9a7-4819-8acd-e8d601f817c5" 00:07:46.746 ], 00:07:46.746 "product_name": "Malloc disk", 00:07:46.746 "block_size": 512, 00:07:46.746 "num_blocks": 65536, 00:07:46.746 "uuid": "baccc2ed-e9a7-4819-8acd-e8d601f817c5", 00:07:46.746 "assigned_rate_limits": { 
00:07:46.746 "rw_ios_per_sec": 0, 00:07:46.746 "rw_mbytes_per_sec": 0, 00:07:46.746 "r_mbytes_per_sec": 0, 00:07:46.746 "w_mbytes_per_sec": 0 00:07:46.746 }, 00:07:46.746 "claimed": true, 00:07:46.746 "claim_type": "exclusive_write", 00:07:46.746 "zoned": false, 00:07:46.746 "supported_io_types": { 00:07:46.746 "read": true, 00:07:46.746 "write": true, 00:07:46.746 "unmap": true, 00:07:46.746 "flush": true, 00:07:46.746 "reset": true, 00:07:46.746 "nvme_admin": false, 00:07:46.746 "nvme_io": false, 00:07:46.746 "nvme_io_md": false, 00:07:46.746 "write_zeroes": true, 00:07:46.746 "zcopy": true, 00:07:46.746 "get_zone_info": false, 00:07:46.746 "zone_management": false, 00:07:46.746 "zone_append": false, 00:07:46.746 "compare": false, 00:07:46.746 "compare_and_write": false, 00:07:46.746 "abort": true, 00:07:46.746 "seek_hole": false, 00:07:46.746 "seek_data": false, 00:07:46.746 "copy": true, 00:07:46.746 "nvme_iov_md": false 00:07:46.746 }, 00:07:46.746 "memory_domains": [ 00:07:46.746 { 00:07:46.746 "dma_device_id": "system", 00:07:46.746 "dma_device_type": 1 00:07:46.746 }, 00:07:46.746 { 00:07:46.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.746 "dma_device_type": 2 00:07:46.746 } 00:07:46.746 ], 00:07:46.746 "driver_specific": {} 00:07:46.746 } 00:07:46.746 ] 00:07:46.746 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.746 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:46.746 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:07:46.746 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.746 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.746 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:07:46.746 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.746 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:46.746 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.746 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.746 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.746 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.746 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.746 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.746 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.746 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.746 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.746 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.746 "name": "Existed_Raid", 00:07:46.747 "uuid": "1dc26cf0-bcd4-4797-81a8-297ff77b40cb", 00:07:46.747 "strip_size_kb": 64, 00:07:46.747 "state": "online", 00:07:46.747 "raid_level": "concat", 00:07:46.747 "superblock": false, 00:07:46.747 "num_base_bdevs": 3, 00:07:46.747 "num_base_bdevs_discovered": 3, 00:07:46.747 "num_base_bdevs_operational": 3, 00:07:46.747 "base_bdevs_list": [ 00:07:46.747 { 00:07:46.747 "name": "NewBaseBdev", 00:07:46.747 "uuid": "baccc2ed-e9a7-4819-8acd-e8d601f817c5", 00:07:46.747 "is_configured": true, 00:07:46.747 "data_offset": 0, 00:07:46.747 "data_size": 65536 00:07:46.747 }, 00:07:46.747 { 00:07:46.747 "name": 
"BaseBdev2", 00:07:46.747 "uuid": "037093fb-af3b-4f0f-b8e3-64fd47a2326e", 00:07:46.747 "is_configured": true, 00:07:46.747 "data_offset": 0, 00:07:46.747 "data_size": 65536 00:07:46.747 }, 00:07:46.747 { 00:07:46.747 "name": "BaseBdev3", 00:07:46.747 "uuid": "c04c45ad-1272-4b16-825e-fa134c65bd2f", 00:07:46.747 "is_configured": true, 00:07:46.747 "data_offset": 0, 00:07:46.747 "data_size": 65536 00:07:46.747 } 00:07:46.747 ] 00:07:46.747 }' 00:07:46.747 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.747 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.009 [2024-10-30 09:42:25.456437] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
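Once the raid is online, `verify_raid_bdev_properties` (bdev_bdev_raid.sh@187-193 above) compares the raid volume against each base bdev by rendering `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` for both and matching the strings; jq's `join` renders `null` as an empty string, which is why the log's comparison is `[[ 512 == \5\1\2\ \ \ ]]` ("512" plus three spaces). A small sketch of that comparison (helper name and sample dicts are ours, illustrating this run's values):

```python
def fmt_props(bdev):
    # jq: [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")
    # jq's join() renders null/absent fields as empty strings, hence trailing spaces.
    fields = [bdev.get("block_size"), bdev.get("md_size"),
              bdev.get("md_interleave"), bdev.get("dif_type")]
    return " ".join("" if v is None else str(v) for v in fields)

raid_volume = {"name": "Existed_Raid", "block_size": 512}  # metadata fields absent in this run
base_bdev = {"name": "NewBaseBdev", "block_size": 512}

cmp_raid_bdev = fmt_props(raid_volume)
assert cmp_raid_bdev == "512   "          # matches the log's cmp_raid_bdev='512   '
assert fmt_props(base_bdev) == cmp_raid_bdev
```

The loop in the log then repeats the same comparison for BaseBdev2 and BaseBdev3, confirming every configured base bdev shares the raid volume's block size and metadata layout.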
00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:47.009 "name": "Existed_Raid", 00:07:47.009 "aliases": [ 00:07:47.009 "1dc26cf0-bcd4-4797-81a8-297ff77b40cb" 00:07:47.009 ], 00:07:47.009 "product_name": "Raid Volume", 00:07:47.009 "block_size": 512, 00:07:47.009 "num_blocks": 196608, 00:07:47.009 "uuid": "1dc26cf0-bcd4-4797-81a8-297ff77b40cb", 00:07:47.009 "assigned_rate_limits": { 00:07:47.009 "rw_ios_per_sec": 0, 00:07:47.009 "rw_mbytes_per_sec": 0, 00:07:47.009 "r_mbytes_per_sec": 0, 00:07:47.009 "w_mbytes_per_sec": 0 00:07:47.009 }, 00:07:47.009 "claimed": false, 00:07:47.009 "zoned": false, 00:07:47.009 "supported_io_types": { 00:07:47.009 "read": true, 00:07:47.009 "write": true, 00:07:47.009 "unmap": true, 00:07:47.009 "flush": true, 00:07:47.009 "reset": true, 00:07:47.009 "nvme_admin": false, 00:07:47.009 "nvme_io": false, 00:07:47.009 "nvme_io_md": false, 00:07:47.009 "write_zeroes": true, 00:07:47.009 "zcopy": false, 00:07:47.009 "get_zone_info": false, 00:07:47.009 "zone_management": false, 00:07:47.009 "zone_append": false, 00:07:47.009 "compare": false, 00:07:47.009 "compare_and_write": false, 00:07:47.009 "abort": false, 00:07:47.009 "seek_hole": false, 00:07:47.009 "seek_data": false, 00:07:47.009 "copy": false, 00:07:47.009 "nvme_iov_md": false 00:07:47.009 }, 00:07:47.009 "memory_domains": [ 00:07:47.009 { 00:07:47.009 "dma_device_id": "system", 00:07:47.009 "dma_device_type": 1 00:07:47.009 }, 00:07:47.009 { 00:07:47.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.009 "dma_device_type": 2 00:07:47.009 }, 00:07:47.009 { 00:07:47.009 "dma_device_id": "system", 00:07:47.009 "dma_device_type": 1 00:07:47.009 }, 00:07:47.009 { 00:07:47.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.009 "dma_device_type": 2 00:07:47.009 }, 00:07:47.009 { 00:07:47.009 "dma_device_id": "system", 00:07:47.009 "dma_device_type": 1 00:07:47.009 }, 00:07:47.009 { 00:07:47.009 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:47.009 "dma_device_type": 2 00:07:47.009 } 00:07:47.009 ], 00:07:47.009 "driver_specific": { 00:07:47.009 "raid": { 00:07:47.009 "uuid": "1dc26cf0-bcd4-4797-81a8-297ff77b40cb", 00:07:47.009 "strip_size_kb": 64, 00:07:47.009 "state": "online", 00:07:47.009 "raid_level": "concat", 00:07:47.009 "superblock": false, 00:07:47.009 "num_base_bdevs": 3, 00:07:47.009 "num_base_bdevs_discovered": 3, 00:07:47.009 "num_base_bdevs_operational": 3, 00:07:47.009 "base_bdevs_list": [ 00:07:47.009 { 00:07:47.009 "name": "NewBaseBdev", 00:07:47.009 "uuid": "baccc2ed-e9a7-4819-8acd-e8d601f817c5", 00:07:47.009 "is_configured": true, 00:07:47.009 "data_offset": 0, 00:07:47.009 "data_size": 65536 00:07:47.009 }, 00:07:47.009 { 00:07:47.009 "name": "BaseBdev2", 00:07:47.009 "uuid": "037093fb-af3b-4f0f-b8e3-64fd47a2326e", 00:07:47.009 "is_configured": true, 00:07:47.009 "data_offset": 0, 00:07:47.009 "data_size": 65536 00:07:47.009 }, 00:07:47.009 { 00:07:47.009 "name": "BaseBdev3", 00:07:47.009 "uuid": "c04c45ad-1272-4b16-825e-fa134c65bd2f", 00:07:47.009 "is_configured": true, 00:07:47.009 "data_offset": 0, 00:07:47.009 "data_size": 65536 00:07:47.009 } 00:07:47.009 ] 00:07:47.009 } 00:07:47.009 } 00:07:47.009 }' 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:47.009 BaseBdev2 00:07:47.009 BaseBdev3' 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.009 09:42:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:47.009 
09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.009 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.270 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.270 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.270 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:47.270 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.270 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.270 [2024-10-30 09:42:25.644159] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:47.270 [2024-10-30 09:42:25.644178] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:47.270 [2024-10-30 09:42:25.644246] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.270 [2024-10-30 09:42:25.644301] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:47.270 [2024-10-30 09:42:25.644313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:07:47.270 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.270 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64190 00:07:47.270 09:42:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@952 -- # '[' -z 64190 ']' 00:07:47.270 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 64190 00:07:47.270 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:07:47.270 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:47.270 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64190 00:07:47.270 killing process with pid 64190 00:07:47.270 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:47.270 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:47.270 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64190' 00:07:47.270 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 64190 00:07:47.270 [2024-10-30 09:42:25.676336] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:47.270 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 64190 00:07:47.270 [2024-10-30 09:42:25.864075] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:48.302 00:07:48.302 real 0m7.760s 00:07:48.302 user 0m12.387s 00:07:48.302 sys 0m1.222s 00:07:48.302 ************************************ 00:07:48.302 END TEST raid_state_function_test 00:07:48.302 ************************************ 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.302 09:42:26 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test concat 3 true 00:07:48.302 09:42:26 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:48.302 09:42:26 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:48.302 09:42:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:48.302 ************************************ 00:07:48.302 START TEST raid_state_function_test_sb 00:07:48.302 ************************************ 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 true 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 
00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:48.302 Process raid pid: 64784 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64784 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64784' 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64784 00:07:48.302 09:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 64784 ']' 00:07:48.303 
09:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.303 09:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:48.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.303 09:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.303 09:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:48.303 09:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.303 09:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:48.303 [2024-10-30 09:42:26.706802] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:07:48.303 [2024-10-30 09:42:26.706919] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.303 [2024-10-30 09:42:26.868464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.564 [2024-10-30 09:42:26.980268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.564 [2024-10-30 09:42:27.116709] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.564 [2024-10-30 09:42:27.116739] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.137 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:49.137 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:07:49.137 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:49.137 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.137 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.137 [2024-10-30 09:42:27.558015] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:49.137 [2024-10-30 09:42:27.558071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:49.137 [2024-10-30 09:42:27.558081] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:49.137 [2024-10-30 09:42:27.558090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:49.137 [2024-10-30 09:42:27.558097] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:07:49.137 [2024-10-30 09:42:27.558105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:49.137 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.137 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:49.137 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.137 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.137 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:49.137 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.137 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:49.137 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.137 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.137 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.137 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.137 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.137 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.137 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.137 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.137 09:42:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.137 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.137 "name": "Existed_Raid", 00:07:49.137 "uuid": "b7ad9a8e-14d2-4faa-bc29-515293e4b62e", 00:07:49.137 "strip_size_kb": 64, 00:07:49.137 "state": "configuring", 00:07:49.137 "raid_level": "concat", 00:07:49.137 "superblock": true, 00:07:49.137 "num_base_bdevs": 3, 00:07:49.137 "num_base_bdevs_discovered": 0, 00:07:49.137 "num_base_bdevs_operational": 3, 00:07:49.137 "base_bdevs_list": [ 00:07:49.137 { 00:07:49.137 "name": "BaseBdev1", 00:07:49.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.137 "is_configured": false, 00:07:49.137 "data_offset": 0, 00:07:49.137 "data_size": 0 00:07:49.137 }, 00:07:49.137 { 00:07:49.137 "name": "BaseBdev2", 00:07:49.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.137 "is_configured": false, 00:07:49.137 "data_offset": 0, 00:07:49.137 "data_size": 0 00:07:49.137 }, 00:07:49.137 { 00:07:49.137 "name": "BaseBdev3", 00:07:49.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.137 "is_configured": false, 00:07:49.137 "data_offset": 0, 00:07:49.137 "data_size": 0 00:07:49.137 } 00:07:49.137 ] 00:07:49.137 }' 00:07:49.137 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.137 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.400 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:49.400 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.400 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.400 [2024-10-30 09:42:27.874180] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:49.400 [2024-10-30 09:42:27.874212] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:49.400 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.400 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:49.400 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.400 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.400 [2024-10-30 09:42:27.882041] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:49.400 [2024-10-30 09:42:27.882090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:49.400 [2024-10-30 09:42:27.882099] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:49.400 [2024-10-30 09:42:27.882109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:49.400 [2024-10-30 09:42:27.882116] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:49.401 [2024-10-30 09:42:27.882125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.401 [2024-10-30 09:42:27.914313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:49.401 BaseBdev1 
00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.401 [ 00:07:49.401 { 00:07:49.401 "name": "BaseBdev1", 00:07:49.401 "aliases": [ 00:07:49.401 "71f6e377-ee00-4eb2-b3df-1b1f42c5f9ca" 00:07:49.401 ], 00:07:49.401 "product_name": "Malloc disk", 00:07:49.401 "block_size": 512, 00:07:49.401 "num_blocks": 65536, 00:07:49.401 "uuid": "71f6e377-ee00-4eb2-b3df-1b1f42c5f9ca", 00:07:49.401 "assigned_rate_limits": { 00:07:49.401 
"rw_ios_per_sec": 0, 00:07:49.401 "rw_mbytes_per_sec": 0, 00:07:49.401 "r_mbytes_per_sec": 0, 00:07:49.401 "w_mbytes_per_sec": 0 00:07:49.401 }, 00:07:49.401 "claimed": true, 00:07:49.401 "claim_type": "exclusive_write", 00:07:49.401 "zoned": false, 00:07:49.401 "supported_io_types": { 00:07:49.401 "read": true, 00:07:49.401 "write": true, 00:07:49.401 "unmap": true, 00:07:49.401 "flush": true, 00:07:49.401 "reset": true, 00:07:49.401 "nvme_admin": false, 00:07:49.401 "nvme_io": false, 00:07:49.401 "nvme_io_md": false, 00:07:49.401 "write_zeroes": true, 00:07:49.401 "zcopy": true, 00:07:49.401 "get_zone_info": false, 00:07:49.401 "zone_management": false, 00:07:49.401 "zone_append": false, 00:07:49.401 "compare": false, 00:07:49.401 "compare_and_write": false, 00:07:49.401 "abort": true, 00:07:49.401 "seek_hole": false, 00:07:49.401 "seek_data": false, 00:07:49.401 "copy": true, 00:07:49.401 "nvme_iov_md": false 00:07:49.401 }, 00:07:49.401 "memory_domains": [ 00:07:49.401 { 00:07:49.401 "dma_device_id": "system", 00:07:49.401 "dma_device_type": 1 00:07:49.401 }, 00:07:49.401 { 00:07:49.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.401 "dma_device_type": 2 00:07:49.401 } 00:07:49.401 ], 00:07:49.401 "driver_specific": {} 00:07:49.401 } 00:07:49.401 ] 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.401 "name": "Existed_Raid", 00:07:49.401 "uuid": "22731309-e365-47b2-9a9d-3ba911abaaef", 00:07:49.401 "strip_size_kb": 64, 00:07:49.401 "state": "configuring", 00:07:49.401 "raid_level": "concat", 00:07:49.401 "superblock": true, 00:07:49.401 "num_base_bdevs": 3, 00:07:49.401 "num_base_bdevs_discovered": 1, 00:07:49.401 "num_base_bdevs_operational": 3, 00:07:49.401 "base_bdevs_list": [ 00:07:49.401 { 00:07:49.401 "name": "BaseBdev1", 00:07:49.401 "uuid": "71f6e377-ee00-4eb2-b3df-1b1f42c5f9ca", 00:07:49.401 "is_configured": true, 00:07:49.401 "data_offset": 2048, 00:07:49.401 "data_size": 
63488 00:07:49.401 }, 00:07:49.401 { 00:07:49.401 "name": "BaseBdev2", 00:07:49.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.401 "is_configured": false, 00:07:49.401 "data_offset": 0, 00:07:49.401 "data_size": 0 00:07:49.401 }, 00:07:49.401 { 00:07:49.401 "name": "BaseBdev3", 00:07:49.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.401 "is_configured": false, 00:07:49.401 "data_offset": 0, 00:07:49.401 "data_size": 0 00:07:49.401 } 00:07:49.401 ] 00:07:49.401 }' 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.401 09:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.663 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:49.663 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.663 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.663 [2024-10-30 09:42:28.246433] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:49.663 [2024-10-30 09:42:28.246482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:49.663 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.663 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:49.663 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.663 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.663 [2024-10-30 09:42:28.254489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:49.663 [2024-10-30 
09:42:28.256368] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:49.663 [2024-10-30 09:42:28.256407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:49.663 [2024-10-30 09:42:28.256417] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:49.663 [2024-10-30 09:42:28.256428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:49.663 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.663 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:49.663 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:49.663 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:49.663 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.663 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.663 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:49.663 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.663 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:49.663 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.663 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.663 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.664 09:42:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.664 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.664 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.664 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.664 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.664 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.927 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.927 "name": "Existed_Raid", 00:07:49.927 "uuid": "4adb4f24-9c10-4cf9-8af4-09eed877a4d1", 00:07:49.927 "strip_size_kb": 64, 00:07:49.927 "state": "configuring", 00:07:49.927 "raid_level": "concat", 00:07:49.927 "superblock": true, 00:07:49.927 "num_base_bdevs": 3, 00:07:49.927 "num_base_bdevs_discovered": 1, 00:07:49.927 "num_base_bdevs_operational": 3, 00:07:49.927 "base_bdevs_list": [ 00:07:49.927 { 00:07:49.927 "name": "BaseBdev1", 00:07:49.927 "uuid": "71f6e377-ee00-4eb2-b3df-1b1f42c5f9ca", 00:07:49.927 "is_configured": true, 00:07:49.927 "data_offset": 2048, 00:07:49.927 "data_size": 63488 00:07:49.927 }, 00:07:49.927 { 00:07:49.927 "name": "BaseBdev2", 00:07:49.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.927 "is_configured": false, 00:07:49.927 "data_offset": 0, 00:07:49.927 "data_size": 0 00:07:49.927 }, 00:07:49.927 { 00:07:49.927 "name": "BaseBdev3", 00:07:49.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.927 "is_configured": false, 00:07:49.927 "data_offset": 0, 00:07:49.927 "data_size": 0 00:07:49.927 } 00:07:49.927 ] 00:07:49.927 }' 00:07:49.927 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.927 09:42:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.190 [2024-10-30 09:42:28.597100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:50.190 BaseBdev2 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.190 [ 00:07:50.190 { 00:07:50.190 "name": "BaseBdev2", 00:07:50.190 "aliases": [ 00:07:50.190 "ac49914e-a1ea-442d-b570-60aa45763a8d" 00:07:50.190 ], 00:07:50.190 "product_name": "Malloc disk", 00:07:50.190 "block_size": 512, 00:07:50.190 "num_blocks": 65536, 00:07:50.190 "uuid": "ac49914e-a1ea-442d-b570-60aa45763a8d", 00:07:50.190 "assigned_rate_limits": { 00:07:50.190 "rw_ios_per_sec": 0, 00:07:50.190 "rw_mbytes_per_sec": 0, 00:07:50.190 "r_mbytes_per_sec": 0, 00:07:50.190 "w_mbytes_per_sec": 0 00:07:50.190 }, 00:07:50.190 "claimed": true, 00:07:50.190 "claim_type": "exclusive_write", 00:07:50.190 "zoned": false, 00:07:50.190 "supported_io_types": { 00:07:50.190 "read": true, 00:07:50.190 "write": true, 00:07:50.190 "unmap": true, 00:07:50.190 "flush": true, 00:07:50.190 "reset": true, 00:07:50.190 "nvme_admin": false, 00:07:50.190 "nvme_io": false, 00:07:50.190 "nvme_io_md": false, 00:07:50.190 "write_zeroes": true, 00:07:50.190 "zcopy": true, 00:07:50.190 "get_zone_info": false, 00:07:50.190 "zone_management": false, 00:07:50.190 "zone_append": false, 00:07:50.190 "compare": false, 00:07:50.190 "compare_and_write": false, 00:07:50.190 "abort": true, 00:07:50.190 "seek_hole": false, 00:07:50.190 "seek_data": false, 00:07:50.190 "copy": true, 00:07:50.190 "nvme_iov_md": false 00:07:50.190 }, 00:07:50.190 "memory_domains": [ 00:07:50.190 { 00:07:50.190 "dma_device_id": "system", 00:07:50.190 "dma_device_type": 1 00:07:50.190 }, 00:07:50.190 { 00:07:50.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.190 "dma_device_type": 2 00:07:50.190 } 00:07:50.190 ], 00:07:50.190 "driver_specific": {} 00:07:50.190 } 00:07:50.190 ] 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@909 -- # return 0 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.190 09:42:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.191 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.191 "name": "Existed_Raid", 00:07:50.191 "uuid": "4adb4f24-9c10-4cf9-8af4-09eed877a4d1", 00:07:50.191 "strip_size_kb": 64, 00:07:50.191 "state": "configuring", 00:07:50.191 "raid_level": "concat", 00:07:50.191 "superblock": true, 00:07:50.191 "num_base_bdevs": 3, 00:07:50.191 "num_base_bdevs_discovered": 2, 00:07:50.191 "num_base_bdevs_operational": 3, 00:07:50.191 "base_bdevs_list": [ 00:07:50.191 { 00:07:50.191 "name": "BaseBdev1", 00:07:50.191 "uuid": "71f6e377-ee00-4eb2-b3df-1b1f42c5f9ca", 00:07:50.191 "is_configured": true, 00:07:50.191 "data_offset": 2048, 00:07:50.191 "data_size": 63488 00:07:50.191 }, 00:07:50.191 { 00:07:50.191 "name": "BaseBdev2", 00:07:50.191 "uuid": "ac49914e-a1ea-442d-b570-60aa45763a8d", 00:07:50.191 "is_configured": true, 00:07:50.191 "data_offset": 2048, 00:07:50.191 "data_size": 63488 00:07:50.191 }, 00:07:50.191 { 00:07:50.191 "name": "BaseBdev3", 00:07:50.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.191 "is_configured": false, 00:07:50.191 "data_offset": 0, 00:07:50.191 "data_size": 0 00:07:50.191 } 00:07:50.191 ] 00:07:50.191 }' 00:07:50.191 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.191 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.453 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:50.453 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.453 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.453 [2024-10-30 09:42:29.004555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:50.453 [2024-10-30 09:42:29.004781] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:50.453 [2024-10-30 09:42:29.004801] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:50.453 [2024-10-30 09:42:29.005078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:50.453 [2024-10-30 09:42:29.005215] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:50.454 [2024-10-30 09:42:29.005224] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:50.454 [2024-10-30 09:42:29.005351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.454 BaseBdev3 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.454 [ 00:07:50.454 { 00:07:50.454 "name": "BaseBdev3", 00:07:50.454 "aliases": [ 00:07:50.454 "67138b94-bac1-4b27-af24-f92a4323a0c5" 00:07:50.454 ], 00:07:50.454 "product_name": "Malloc disk", 00:07:50.454 "block_size": 512, 00:07:50.454 "num_blocks": 65536, 00:07:50.454 "uuid": "67138b94-bac1-4b27-af24-f92a4323a0c5", 00:07:50.454 "assigned_rate_limits": { 00:07:50.454 "rw_ios_per_sec": 0, 00:07:50.454 "rw_mbytes_per_sec": 0, 00:07:50.454 "r_mbytes_per_sec": 0, 00:07:50.454 "w_mbytes_per_sec": 0 00:07:50.454 }, 00:07:50.454 "claimed": true, 00:07:50.454 "claim_type": "exclusive_write", 00:07:50.454 "zoned": false, 00:07:50.454 "supported_io_types": { 00:07:50.454 "read": true, 00:07:50.454 "write": true, 00:07:50.454 "unmap": true, 00:07:50.454 "flush": true, 00:07:50.454 "reset": true, 00:07:50.454 "nvme_admin": false, 00:07:50.454 "nvme_io": false, 00:07:50.454 "nvme_io_md": false, 00:07:50.454 "write_zeroes": true, 00:07:50.454 "zcopy": true, 00:07:50.454 "get_zone_info": false, 00:07:50.454 "zone_management": false, 00:07:50.454 "zone_append": false, 00:07:50.454 "compare": false, 00:07:50.454 "compare_and_write": false, 00:07:50.454 "abort": true, 00:07:50.454 "seek_hole": false, 00:07:50.454 "seek_data": false, 00:07:50.454 "copy": true, 00:07:50.454 "nvme_iov_md": false 00:07:50.454 }, 00:07:50.454 "memory_domains": [ 00:07:50.454 { 00:07:50.454 "dma_device_id": "system", 00:07:50.454 "dma_device_type": 1 00:07:50.454 }, 00:07:50.454 { 00:07:50.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.454 "dma_device_type": 2 00:07:50.454 } 00:07:50.454 ], 00:07:50.454 "driver_specific": 
{} 00:07:50.454 } 00:07:50.454 ] 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.454 "name": "Existed_Raid", 00:07:50.454 "uuid": "4adb4f24-9c10-4cf9-8af4-09eed877a4d1", 00:07:50.454 "strip_size_kb": 64, 00:07:50.454 "state": "online", 00:07:50.454 "raid_level": "concat", 00:07:50.454 "superblock": true, 00:07:50.454 "num_base_bdevs": 3, 00:07:50.454 "num_base_bdevs_discovered": 3, 00:07:50.454 "num_base_bdevs_operational": 3, 00:07:50.454 "base_bdevs_list": [ 00:07:50.454 { 00:07:50.454 "name": "BaseBdev1", 00:07:50.454 "uuid": "71f6e377-ee00-4eb2-b3df-1b1f42c5f9ca", 00:07:50.454 "is_configured": true, 00:07:50.454 "data_offset": 2048, 00:07:50.454 "data_size": 63488 00:07:50.454 }, 00:07:50.454 { 00:07:50.454 "name": "BaseBdev2", 00:07:50.454 "uuid": "ac49914e-a1ea-442d-b570-60aa45763a8d", 00:07:50.454 "is_configured": true, 00:07:50.454 "data_offset": 2048, 00:07:50.454 "data_size": 63488 00:07:50.454 }, 00:07:50.454 { 00:07:50.454 "name": "BaseBdev3", 00:07:50.454 "uuid": "67138b94-bac1-4b27-af24-f92a4323a0c5", 00:07:50.454 "is_configured": true, 00:07:50.454 "data_offset": 2048, 00:07:50.454 "data_size": 63488 00:07:50.454 } 00:07:50.454 ] 00:07:50.454 }' 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.454 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.715 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:50.715 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:50.715 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:07:50.715 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:50.715 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:50.715 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:50.715 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:50.715 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:50.715 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.715 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.976 [2024-10-30 09:42:29.337016] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.976 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.976 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:50.976 "name": "Existed_Raid", 00:07:50.976 "aliases": [ 00:07:50.976 "4adb4f24-9c10-4cf9-8af4-09eed877a4d1" 00:07:50.976 ], 00:07:50.976 "product_name": "Raid Volume", 00:07:50.976 "block_size": 512, 00:07:50.976 "num_blocks": 190464, 00:07:50.976 "uuid": "4adb4f24-9c10-4cf9-8af4-09eed877a4d1", 00:07:50.976 "assigned_rate_limits": { 00:07:50.976 "rw_ios_per_sec": 0, 00:07:50.976 "rw_mbytes_per_sec": 0, 00:07:50.976 "r_mbytes_per_sec": 0, 00:07:50.976 "w_mbytes_per_sec": 0 00:07:50.976 }, 00:07:50.976 "claimed": false, 00:07:50.976 "zoned": false, 00:07:50.976 "supported_io_types": { 00:07:50.976 "read": true, 00:07:50.976 "write": true, 00:07:50.976 "unmap": true, 00:07:50.976 "flush": true, 00:07:50.976 "reset": true, 00:07:50.976 "nvme_admin": false, 00:07:50.976 "nvme_io": false, 00:07:50.976 "nvme_io_md": false, 00:07:50.976 
"write_zeroes": true, 00:07:50.976 "zcopy": false, 00:07:50.976 "get_zone_info": false, 00:07:50.976 "zone_management": false, 00:07:50.976 "zone_append": false, 00:07:50.976 "compare": false, 00:07:50.976 "compare_and_write": false, 00:07:50.976 "abort": false, 00:07:50.976 "seek_hole": false, 00:07:50.976 "seek_data": false, 00:07:50.976 "copy": false, 00:07:50.976 "nvme_iov_md": false 00:07:50.976 }, 00:07:50.976 "memory_domains": [ 00:07:50.976 { 00:07:50.976 "dma_device_id": "system", 00:07:50.976 "dma_device_type": 1 00:07:50.976 }, 00:07:50.976 { 00:07:50.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.976 "dma_device_type": 2 00:07:50.976 }, 00:07:50.976 { 00:07:50.976 "dma_device_id": "system", 00:07:50.976 "dma_device_type": 1 00:07:50.976 }, 00:07:50.976 { 00:07:50.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.976 "dma_device_type": 2 00:07:50.976 }, 00:07:50.976 { 00:07:50.976 "dma_device_id": "system", 00:07:50.976 "dma_device_type": 1 00:07:50.976 }, 00:07:50.976 { 00:07:50.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.976 "dma_device_type": 2 00:07:50.976 } 00:07:50.976 ], 00:07:50.976 "driver_specific": { 00:07:50.976 "raid": { 00:07:50.976 "uuid": "4adb4f24-9c10-4cf9-8af4-09eed877a4d1", 00:07:50.976 "strip_size_kb": 64, 00:07:50.976 "state": "online", 00:07:50.976 "raid_level": "concat", 00:07:50.976 "superblock": true, 00:07:50.976 "num_base_bdevs": 3, 00:07:50.976 "num_base_bdevs_discovered": 3, 00:07:50.976 "num_base_bdevs_operational": 3, 00:07:50.976 "base_bdevs_list": [ 00:07:50.976 { 00:07:50.976 "name": "BaseBdev1", 00:07:50.976 "uuid": "71f6e377-ee00-4eb2-b3df-1b1f42c5f9ca", 00:07:50.976 "is_configured": true, 00:07:50.976 "data_offset": 2048, 00:07:50.976 "data_size": 63488 00:07:50.976 }, 00:07:50.976 { 00:07:50.976 "name": "BaseBdev2", 00:07:50.976 "uuid": "ac49914e-a1ea-442d-b570-60aa45763a8d", 00:07:50.976 "is_configured": true, 00:07:50.976 "data_offset": 2048, 00:07:50.976 "data_size": 63488 00:07:50.976 }, 
00:07:50.976 { 00:07:50.976 "name": "BaseBdev3", 00:07:50.976 "uuid": "67138b94-bac1-4b27-af24-f92a4323a0c5", 00:07:50.976 "is_configured": true, 00:07:50.976 "data_offset": 2048, 00:07:50.977 "data_size": 63488 00:07:50.977 } 00:07:50.977 ] 00:07:50.977 } 00:07:50.977 } 00:07:50.977 }' 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:50.977 BaseBdev2 00:07:50.977 BaseBdev3' 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.977 
09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.977 [2024-10-30 09:42:29.524755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:50.977 [2024-10-30 09:42:29.524783] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:50.977 [2024-10-30 09:42:29.524833] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.977 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.238 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.238 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.238 "name": "Existed_Raid", 00:07:51.238 "uuid": "4adb4f24-9c10-4cf9-8af4-09eed877a4d1", 00:07:51.238 "strip_size_kb": 64, 00:07:51.238 "state": "offline", 00:07:51.238 "raid_level": "concat", 00:07:51.238 "superblock": true, 00:07:51.238 "num_base_bdevs": 3, 00:07:51.238 "num_base_bdevs_discovered": 2, 00:07:51.238 "num_base_bdevs_operational": 2, 00:07:51.238 "base_bdevs_list": [ 00:07:51.238 { 00:07:51.238 "name": null, 00:07:51.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.238 "is_configured": false, 00:07:51.238 "data_offset": 0, 00:07:51.238 "data_size": 63488 00:07:51.238 }, 00:07:51.238 { 00:07:51.238 "name": "BaseBdev2", 00:07:51.238 "uuid": "ac49914e-a1ea-442d-b570-60aa45763a8d", 00:07:51.238 "is_configured": true, 00:07:51.238 "data_offset": 2048, 00:07:51.238 "data_size": 63488 00:07:51.238 }, 00:07:51.238 { 00:07:51.238 "name": "BaseBdev3", 00:07:51.238 "uuid": "67138b94-bac1-4b27-af24-f92a4323a0c5", 
00:07:51.238 "is_configured": true, 00:07:51.238 "data_offset": 2048, 00:07:51.238 "data_size": 63488 00:07:51.238 } 00:07:51.238 ] 00:07:51.238 }' 00:07:51.238 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.238 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.500 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:51.500 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:51.500 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.500 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.500 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.500 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:51.500 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.500 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:51.500 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:51.500 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:51.500 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.500 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.500 [2024-10-30 09:42:29.922811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:51.500 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.500 09:42:29 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:51.500 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:51.500 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.500 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:51.500 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.500 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.500 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.500 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:51.500 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:51.500 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:51.500 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.500 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.500 [2024-10-30 09:42:30.016818] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:51.500 [2024-10-30 09:42:30.016867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:51.500 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.500 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:51.500 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:51.500 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:51.500 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.500 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.500 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:51.500 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.500 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:51.500 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:51.500 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:51.500 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:51.500 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:51.500 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:51.500 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.500 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.762 BaseBdev2 00:07:51.762 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.762 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:51.762 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:51.762 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:51.762 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:51.762 09:42:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:51.762 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:51.762 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:51.762 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.762 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.762 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.762 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:51.762 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.762 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.762 [ 00:07:51.762 { 00:07:51.762 "name": "BaseBdev2", 00:07:51.762 "aliases": [ 00:07:51.762 "f66fbae6-eb2f-4880-8c0f-ab8ce62af2d0" 00:07:51.762 ], 00:07:51.762 "product_name": "Malloc disk", 00:07:51.762 "block_size": 512, 00:07:51.762 "num_blocks": 65536, 00:07:51.762 "uuid": "f66fbae6-eb2f-4880-8c0f-ab8ce62af2d0", 00:07:51.762 "assigned_rate_limits": { 00:07:51.762 "rw_ios_per_sec": 0, 00:07:51.762 "rw_mbytes_per_sec": 0, 00:07:51.762 "r_mbytes_per_sec": 0, 00:07:51.762 "w_mbytes_per_sec": 0 00:07:51.762 }, 00:07:51.762 "claimed": false, 00:07:51.762 "zoned": false, 00:07:51.762 "supported_io_types": { 00:07:51.762 "read": true, 00:07:51.762 "write": true, 00:07:51.762 "unmap": true, 00:07:51.762 "flush": true, 00:07:51.762 "reset": true, 00:07:51.762 "nvme_admin": false, 00:07:51.762 "nvme_io": false, 00:07:51.762 "nvme_io_md": false, 00:07:51.762 "write_zeroes": true, 00:07:51.762 "zcopy": true, 00:07:51.762 "get_zone_info": false, 00:07:51.762 
"zone_management": false, 00:07:51.762 "zone_append": false, 00:07:51.762 "compare": false, 00:07:51.762 "compare_and_write": false, 00:07:51.762 "abort": true, 00:07:51.762 "seek_hole": false, 00:07:51.762 "seek_data": false, 00:07:51.762 "copy": true, 00:07:51.762 "nvme_iov_md": false 00:07:51.762 }, 00:07:51.762 "memory_domains": [ 00:07:51.762 { 00:07:51.762 "dma_device_id": "system", 00:07:51.762 "dma_device_type": 1 00:07:51.762 }, 00:07:51.762 { 00:07:51.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.762 "dma_device_type": 2 00:07:51.762 } 00:07:51.762 ], 00:07:51.762 "driver_specific": {} 00:07:51.762 } 00:07:51.762 ] 00:07:51.762 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.762 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:51.762 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:51.762 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.763 BaseBdev3 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local i 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.763 [ 00:07:51.763 { 00:07:51.763 "name": "BaseBdev3", 00:07:51.763 "aliases": [ 00:07:51.763 "ac788167-40db-42bf-b639-7015c90f0051" 00:07:51.763 ], 00:07:51.763 "product_name": "Malloc disk", 00:07:51.763 "block_size": 512, 00:07:51.763 "num_blocks": 65536, 00:07:51.763 "uuid": "ac788167-40db-42bf-b639-7015c90f0051", 00:07:51.763 "assigned_rate_limits": { 00:07:51.763 "rw_ios_per_sec": 0, 00:07:51.763 "rw_mbytes_per_sec": 0, 00:07:51.763 "r_mbytes_per_sec": 0, 00:07:51.763 "w_mbytes_per_sec": 0 00:07:51.763 }, 00:07:51.763 "claimed": false, 00:07:51.763 "zoned": false, 00:07:51.763 "supported_io_types": { 00:07:51.763 "read": true, 00:07:51.763 "write": true, 00:07:51.763 "unmap": true, 00:07:51.763 "flush": true, 00:07:51.763 "reset": true, 00:07:51.763 "nvme_admin": false, 00:07:51.763 "nvme_io": false, 00:07:51.763 "nvme_io_md": false, 00:07:51.763 "write_zeroes": true, 00:07:51.763 
"zcopy": true, 00:07:51.763 "get_zone_info": false, 00:07:51.763 "zone_management": false, 00:07:51.763 "zone_append": false, 00:07:51.763 "compare": false, 00:07:51.763 "compare_and_write": false, 00:07:51.763 "abort": true, 00:07:51.763 "seek_hole": false, 00:07:51.763 "seek_data": false, 00:07:51.763 "copy": true, 00:07:51.763 "nvme_iov_md": false 00:07:51.763 }, 00:07:51.763 "memory_domains": [ 00:07:51.763 { 00:07:51.763 "dma_device_id": "system", 00:07:51.763 "dma_device_type": 1 00:07:51.763 }, 00:07:51.763 { 00:07:51.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.763 "dma_device_type": 2 00:07:51.763 } 00:07:51.763 ], 00:07:51.763 "driver_specific": {} 00:07:51.763 } 00:07:51.763 ] 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.763 [2024-10-30 09:42:30.220074] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:51.763 [2024-10-30 09:42:30.220117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:51.763 [2024-10-30 09:42:30.220138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:51.763 [2024-10-30 09:42:30.222002] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.763 09:42:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.763 "name": "Existed_Raid", 00:07:51.763 "uuid": "226ec695-a411-4144-8450-1e907630365b", 00:07:51.763 "strip_size_kb": 64, 00:07:51.763 "state": "configuring", 00:07:51.763 "raid_level": "concat", 00:07:51.763 "superblock": true, 00:07:51.763 "num_base_bdevs": 3, 00:07:51.763 "num_base_bdevs_discovered": 2, 00:07:51.763 "num_base_bdevs_operational": 3, 00:07:51.763 "base_bdevs_list": [ 00:07:51.763 { 00:07:51.763 "name": "BaseBdev1", 00:07:51.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.763 "is_configured": false, 00:07:51.763 "data_offset": 0, 00:07:51.763 "data_size": 0 00:07:51.763 }, 00:07:51.763 { 00:07:51.763 "name": "BaseBdev2", 00:07:51.763 "uuid": "f66fbae6-eb2f-4880-8c0f-ab8ce62af2d0", 00:07:51.763 "is_configured": true, 00:07:51.763 "data_offset": 2048, 00:07:51.763 "data_size": 63488 00:07:51.763 }, 00:07:51.763 { 00:07:51.763 "name": "BaseBdev3", 00:07:51.763 "uuid": "ac788167-40db-42bf-b639-7015c90f0051", 00:07:51.763 "is_configured": true, 00:07:51.763 "data_offset": 2048, 00:07:51.763 "data_size": 63488 00:07:51.763 } 00:07:51.763 ] 00:07:51.763 }' 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.763 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.024 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:52.024 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.024 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.024 [2024-10-30 09:42:30.540131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:52.024 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.024 09:42:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:52.024 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.024 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:52.024 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:52.024 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.024 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:52.024 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.024 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.024 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.024 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.024 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.024 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.024 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.024 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.024 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.024 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.024 "name": "Existed_Raid", 00:07:52.024 "uuid": "226ec695-a411-4144-8450-1e907630365b", 00:07:52.024 "strip_size_kb": 64, 
00:07:52.024 "state": "configuring", 00:07:52.024 "raid_level": "concat", 00:07:52.024 "superblock": true, 00:07:52.024 "num_base_bdevs": 3, 00:07:52.024 "num_base_bdevs_discovered": 1, 00:07:52.024 "num_base_bdevs_operational": 3, 00:07:52.024 "base_bdevs_list": [ 00:07:52.024 { 00:07:52.024 "name": "BaseBdev1", 00:07:52.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.024 "is_configured": false, 00:07:52.024 "data_offset": 0, 00:07:52.024 "data_size": 0 00:07:52.024 }, 00:07:52.024 { 00:07:52.024 "name": null, 00:07:52.024 "uuid": "f66fbae6-eb2f-4880-8c0f-ab8ce62af2d0", 00:07:52.024 "is_configured": false, 00:07:52.024 "data_offset": 0, 00:07:52.024 "data_size": 63488 00:07:52.024 }, 00:07:52.024 { 00:07:52.024 "name": "BaseBdev3", 00:07:52.024 "uuid": "ac788167-40db-42bf-b639-7015c90f0051", 00:07:52.024 "is_configured": true, 00:07:52.024 "data_offset": 2048, 00:07:52.024 "data_size": 63488 00:07:52.024 } 00:07:52.024 ] 00:07:52.024 }' 00:07:52.024 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.024 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.286 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:52.286 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.286 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.286 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.286 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.286 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:52.286 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:07:52.286 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.286 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.547 [2024-10-30 09:42:30.918865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:52.547 BaseBdev1 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.547 
[ 00:07:52.547 { 00:07:52.547 "name": "BaseBdev1", 00:07:52.547 "aliases": [ 00:07:52.547 "f1c2d63a-cef7-470a-b153-8c0203c758d5" 00:07:52.547 ], 00:07:52.547 "product_name": "Malloc disk", 00:07:52.547 "block_size": 512, 00:07:52.547 "num_blocks": 65536, 00:07:52.547 "uuid": "f1c2d63a-cef7-470a-b153-8c0203c758d5", 00:07:52.547 "assigned_rate_limits": { 00:07:52.547 "rw_ios_per_sec": 0, 00:07:52.547 "rw_mbytes_per_sec": 0, 00:07:52.547 "r_mbytes_per_sec": 0, 00:07:52.547 "w_mbytes_per_sec": 0 00:07:52.547 }, 00:07:52.547 "claimed": true, 00:07:52.547 "claim_type": "exclusive_write", 00:07:52.547 "zoned": false, 00:07:52.547 "supported_io_types": { 00:07:52.547 "read": true, 00:07:52.547 "write": true, 00:07:52.547 "unmap": true, 00:07:52.547 "flush": true, 00:07:52.547 "reset": true, 00:07:52.547 "nvme_admin": false, 00:07:52.547 "nvme_io": false, 00:07:52.547 "nvme_io_md": false, 00:07:52.547 "write_zeroes": true, 00:07:52.547 "zcopy": true, 00:07:52.547 "get_zone_info": false, 00:07:52.547 "zone_management": false, 00:07:52.547 "zone_append": false, 00:07:52.547 "compare": false, 00:07:52.547 "compare_and_write": false, 00:07:52.547 "abort": true, 00:07:52.547 "seek_hole": false, 00:07:52.547 "seek_data": false, 00:07:52.547 "copy": true, 00:07:52.547 "nvme_iov_md": false 00:07:52.547 }, 00:07:52.547 "memory_domains": [ 00:07:52.547 { 00:07:52.547 "dma_device_id": "system", 00:07:52.547 "dma_device_type": 1 00:07:52.547 }, 00:07:52.547 { 00:07:52.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.547 "dma_device_type": 2 00:07:52.547 } 00:07:52.547 ], 00:07:52.547 "driver_specific": {} 00:07:52.547 } 00:07:52.547 ] 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.547 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.548 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.548 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.548 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.548 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.548 "name": "Existed_Raid", 00:07:52.548 "uuid": "226ec695-a411-4144-8450-1e907630365b", 00:07:52.548 "strip_size_kb": 64, 00:07:52.548 "state": "configuring", 00:07:52.548 "raid_level": "concat", 00:07:52.548 "superblock": true, 
00:07:52.548 "num_base_bdevs": 3, 00:07:52.548 "num_base_bdevs_discovered": 2, 00:07:52.548 "num_base_bdevs_operational": 3, 00:07:52.548 "base_bdevs_list": [ 00:07:52.548 { 00:07:52.548 "name": "BaseBdev1", 00:07:52.548 "uuid": "f1c2d63a-cef7-470a-b153-8c0203c758d5", 00:07:52.548 "is_configured": true, 00:07:52.548 "data_offset": 2048, 00:07:52.548 "data_size": 63488 00:07:52.548 }, 00:07:52.548 { 00:07:52.548 "name": null, 00:07:52.548 "uuid": "f66fbae6-eb2f-4880-8c0f-ab8ce62af2d0", 00:07:52.548 "is_configured": false, 00:07:52.548 "data_offset": 0, 00:07:52.548 "data_size": 63488 00:07:52.548 }, 00:07:52.548 { 00:07:52.548 "name": "BaseBdev3", 00:07:52.548 "uuid": "ac788167-40db-42bf-b639-7015c90f0051", 00:07:52.548 "is_configured": true, 00:07:52.548 "data_offset": 2048, 00:07:52.548 "data_size": 63488 00:07:52.548 } 00:07:52.548 ] 00:07:52.548 }' 00:07:52.548 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.548 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.807 [2024-10-30 09:42:31.283003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.807 "name": "Existed_Raid", 00:07:52.807 "uuid": "226ec695-a411-4144-8450-1e907630365b", 00:07:52.807 "strip_size_kb": 64, 00:07:52.807 "state": "configuring", 00:07:52.807 "raid_level": "concat", 00:07:52.807 "superblock": true, 00:07:52.807 "num_base_bdevs": 3, 00:07:52.807 "num_base_bdevs_discovered": 1, 00:07:52.807 "num_base_bdevs_operational": 3, 00:07:52.807 "base_bdevs_list": [ 00:07:52.807 { 00:07:52.807 "name": "BaseBdev1", 00:07:52.807 "uuid": "f1c2d63a-cef7-470a-b153-8c0203c758d5", 00:07:52.807 "is_configured": true, 00:07:52.807 "data_offset": 2048, 00:07:52.807 "data_size": 63488 00:07:52.807 }, 00:07:52.807 { 00:07:52.807 "name": null, 00:07:52.807 "uuid": "f66fbae6-eb2f-4880-8c0f-ab8ce62af2d0", 00:07:52.807 "is_configured": false, 00:07:52.807 "data_offset": 0, 00:07:52.807 "data_size": 63488 00:07:52.807 }, 00:07:52.807 { 00:07:52.807 "name": null, 00:07:52.807 "uuid": "ac788167-40db-42bf-b639-7015c90f0051", 00:07:52.807 "is_configured": false, 00:07:52.807 "data_offset": 0, 00:07:52.807 "data_size": 63488 00:07:52.807 } 00:07:52.807 ] 00:07:52.807 }' 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.807 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.068 [2024-10-30 09:42:31.639131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.068 "name": "Existed_Raid", 00:07:53.068 "uuid": "226ec695-a411-4144-8450-1e907630365b", 00:07:53.068 "strip_size_kb": 64, 00:07:53.068 "state": "configuring", 00:07:53.068 "raid_level": "concat", 00:07:53.068 "superblock": true, 00:07:53.068 "num_base_bdevs": 3, 00:07:53.068 "num_base_bdevs_discovered": 2, 00:07:53.068 "num_base_bdevs_operational": 3, 00:07:53.068 "base_bdevs_list": [ 00:07:53.068 { 00:07:53.068 "name": "BaseBdev1", 00:07:53.068 "uuid": "f1c2d63a-cef7-470a-b153-8c0203c758d5", 00:07:53.068 "is_configured": true, 00:07:53.068 "data_offset": 2048, 00:07:53.068 "data_size": 63488 00:07:53.068 }, 00:07:53.068 { 00:07:53.068 "name": null, 00:07:53.068 "uuid": "f66fbae6-eb2f-4880-8c0f-ab8ce62af2d0", 00:07:53.068 "is_configured": false, 00:07:53.068 "data_offset": 0, 00:07:53.068 "data_size": 63488 00:07:53.068 }, 00:07:53.068 { 00:07:53.068 "name": "BaseBdev3", 00:07:53.068 "uuid": "ac788167-40db-42bf-b639-7015c90f0051", 00:07:53.068 "is_configured": true, 00:07:53.068 "data_offset": 2048, 00:07:53.068 "data_size": 63488 00:07:53.068 } 00:07:53.068 ] 00:07:53.068 }' 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.068 09:42:31 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:07:53.640 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.640 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.640 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.640 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:53.640 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.640 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:53.640 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:53.640 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.640 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.640 [2024-10-30 09:42:32.007226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:53.640 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.640 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:53.640 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.640 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:53.640 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:53.640 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.640 09:42:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:53.640 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.640 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.640 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.640 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.640 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.640 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.640 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.640 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.640 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.640 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.640 "name": "Existed_Raid", 00:07:53.640 "uuid": "226ec695-a411-4144-8450-1e907630365b", 00:07:53.640 "strip_size_kb": 64, 00:07:53.640 "state": "configuring", 00:07:53.640 "raid_level": "concat", 00:07:53.640 "superblock": true, 00:07:53.640 "num_base_bdevs": 3, 00:07:53.640 "num_base_bdevs_discovered": 1, 00:07:53.640 "num_base_bdevs_operational": 3, 00:07:53.640 "base_bdevs_list": [ 00:07:53.640 { 00:07:53.640 "name": null, 00:07:53.640 "uuid": "f1c2d63a-cef7-470a-b153-8c0203c758d5", 00:07:53.640 "is_configured": false, 00:07:53.640 "data_offset": 0, 00:07:53.640 "data_size": 63488 00:07:53.640 }, 00:07:53.640 { 00:07:53.640 "name": null, 00:07:53.640 "uuid": "f66fbae6-eb2f-4880-8c0f-ab8ce62af2d0", 00:07:53.640 "is_configured": false, 00:07:53.640 "data_offset": 0, 
00:07:53.640 "data_size": 63488 00:07:53.640 }, 00:07:53.640 { 00:07:53.640 "name": "BaseBdev3", 00:07:53.640 "uuid": "ac788167-40db-42bf-b639-7015c90f0051", 00:07:53.640 "is_configured": true, 00:07:53.640 "data_offset": 2048, 00:07:53.640 "data_size": 63488 00:07:53.640 } 00:07:53.640 ] 00:07:53.640 }' 00:07:53.640 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.640 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.902 [2024-10-30 09:42:32.422198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:53.902 09:42:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.902 "name": "Existed_Raid", 00:07:53.902 "uuid": "226ec695-a411-4144-8450-1e907630365b", 00:07:53.902 "strip_size_kb": 64, 00:07:53.902 "state": "configuring", 00:07:53.902 "raid_level": "concat", 00:07:53.902 "superblock": true, 00:07:53.902 "num_base_bdevs": 3, 00:07:53.902 
"num_base_bdevs_discovered": 2, 00:07:53.902 "num_base_bdevs_operational": 3, 00:07:53.902 "base_bdevs_list": [ 00:07:53.902 { 00:07:53.902 "name": null, 00:07:53.902 "uuid": "f1c2d63a-cef7-470a-b153-8c0203c758d5", 00:07:53.902 "is_configured": false, 00:07:53.902 "data_offset": 0, 00:07:53.902 "data_size": 63488 00:07:53.902 }, 00:07:53.902 { 00:07:53.902 "name": "BaseBdev2", 00:07:53.902 "uuid": "f66fbae6-eb2f-4880-8c0f-ab8ce62af2d0", 00:07:53.902 "is_configured": true, 00:07:53.902 "data_offset": 2048, 00:07:53.902 "data_size": 63488 00:07:53.902 }, 00:07:53.902 { 00:07:53.902 "name": "BaseBdev3", 00:07:53.902 "uuid": "ac788167-40db-42bf-b639-7015c90f0051", 00:07:53.902 "is_configured": true, 00:07:53.902 "data_offset": 2048, 00:07:53.902 "data_size": 63488 00:07:53.902 } 00:07:53.902 ] 00:07:53.902 }' 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.902 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.164 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.164 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.164 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.164 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:54.164 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.164 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:54.164 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.164 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.164 09:42:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.164 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:54.426 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.426 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f1c2d63a-cef7-470a-b153-8c0203c758d5 00:07:54.426 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.426 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.426 [2024-10-30 09:42:32.832553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:54.426 [2024-10-30 09:42:32.832742] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:54.426 [2024-10-30 09:42:32.832757] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:54.426 [2024-10-30 09:42:32.832997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:54.426 [2024-10-30 09:42:32.833152] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:54.426 [2024-10-30 09:42:32.833161] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:07:54.426 [2024-10-30 09:42:32.833279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.426 NewBaseBdev 00:07:54.426 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.426 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:54.426 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 
00:07:54.426 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:54.426 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:54.426 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:54.426 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:54.426 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:54.426 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.426 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.426 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.426 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:54.426 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.426 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.426 [ 00:07:54.426 { 00:07:54.426 "name": "NewBaseBdev", 00:07:54.426 "aliases": [ 00:07:54.426 "f1c2d63a-cef7-470a-b153-8c0203c758d5" 00:07:54.426 ], 00:07:54.426 "product_name": "Malloc disk", 00:07:54.426 "block_size": 512, 00:07:54.426 "num_blocks": 65536, 00:07:54.426 "uuid": "f1c2d63a-cef7-470a-b153-8c0203c758d5", 00:07:54.426 "assigned_rate_limits": { 00:07:54.426 "rw_ios_per_sec": 0, 00:07:54.426 "rw_mbytes_per_sec": 0, 00:07:54.426 "r_mbytes_per_sec": 0, 00:07:54.427 "w_mbytes_per_sec": 0 00:07:54.427 }, 00:07:54.427 "claimed": true, 00:07:54.427 "claim_type": "exclusive_write", 00:07:54.427 "zoned": false, 00:07:54.427 "supported_io_types": { 00:07:54.427 "read": true, 00:07:54.427 "write": true, 
00:07:54.427 "unmap": true, 00:07:54.427 "flush": true, 00:07:54.427 "reset": true, 00:07:54.427 "nvme_admin": false, 00:07:54.427 "nvme_io": false, 00:07:54.427 "nvme_io_md": false, 00:07:54.427 "write_zeroes": true, 00:07:54.427 "zcopy": true, 00:07:54.427 "get_zone_info": false, 00:07:54.427 "zone_management": false, 00:07:54.427 "zone_append": false, 00:07:54.427 "compare": false, 00:07:54.427 "compare_and_write": false, 00:07:54.427 "abort": true, 00:07:54.427 "seek_hole": false, 00:07:54.427 "seek_data": false, 00:07:54.427 "copy": true, 00:07:54.427 "nvme_iov_md": false 00:07:54.427 }, 00:07:54.427 "memory_domains": [ 00:07:54.427 { 00:07:54.427 "dma_device_id": "system", 00:07:54.427 "dma_device_type": 1 00:07:54.427 }, 00:07:54.427 { 00:07:54.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.427 "dma_device_type": 2 00:07:54.427 } 00:07:54.427 ], 00:07:54.427 "driver_specific": {} 00:07:54.427 } 00:07:54.427 ] 00:07:54.427 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.427 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:54.427 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:07:54.427 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.427 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:54.427 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:54.427 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.427 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:54.427 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:54.427 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.427 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.427 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.427 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.427 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.427 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.427 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.427 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.427 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.427 "name": "Existed_Raid", 00:07:54.427 "uuid": "226ec695-a411-4144-8450-1e907630365b", 00:07:54.427 "strip_size_kb": 64, 00:07:54.427 "state": "online", 00:07:54.427 "raid_level": "concat", 00:07:54.427 "superblock": true, 00:07:54.427 "num_base_bdevs": 3, 00:07:54.427 "num_base_bdevs_discovered": 3, 00:07:54.427 "num_base_bdevs_operational": 3, 00:07:54.427 "base_bdevs_list": [ 00:07:54.427 { 00:07:54.427 "name": "NewBaseBdev", 00:07:54.427 "uuid": "f1c2d63a-cef7-470a-b153-8c0203c758d5", 00:07:54.427 "is_configured": true, 00:07:54.427 "data_offset": 2048, 00:07:54.427 "data_size": 63488 00:07:54.427 }, 00:07:54.427 { 00:07:54.427 "name": "BaseBdev2", 00:07:54.427 "uuid": "f66fbae6-eb2f-4880-8c0f-ab8ce62af2d0", 00:07:54.427 "is_configured": true, 00:07:54.427 "data_offset": 2048, 00:07:54.427 "data_size": 63488 00:07:54.427 }, 00:07:54.427 { 00:07:54.427 "name": "BaseBdev3", 00:07:54.427 "uuid": 
"ac788167-40db-42bf-b639-7015c90f0051", 00:07:54.427 "is_configured": true, 00:07:54.427 "data_offset": 2048, 00:07:54.427 "data_size": 63488 00:07:54.427 } 00:07:54.427 ] 00:07:54.427 }' 00:07:54.427 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.427 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.689 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:54.689 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:54.689 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:54.689 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:54.689 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:54.689 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:54.689 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:54.689 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:54.689 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.689 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.689 [2024-10-30 09:42:33.153022] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.689 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.689 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:54.689 "name": "Existed_Raid", 00:07:54.689 "aliases": [ 00:07:54.689 "226ec695-a411-4144-8450-1e907630365b" 
00:07:54.689 ], 00:07:54.689 "product_name": "Raid Volume", 00:07:54.689 "block_size": 512, 00:07:54.689 "num_blocks": 190464, 00:07:54.689 "uuid": "226ec695-a411-4144-8450-1e907630365b", 00:07:54.689 "assigned_rate_limits": { 00:07:54.689 "rw_ios_per_sec": 0, 00:07:54.689 "rw_mbytes_per_sec": 0, 00:07:54.689 "r_mbytes_per_sec": 0, 00:07:54.689 "w_mbytes_per_sec": 0 00:07:54.689 }, 00:07:54.689 "claimed": false, 00:07:54.689 "zoned": false, 00:07:54.689 "supported_io_types": { 00:07:54.689 "read": true, 00:07:54.689 "write": true, 00:07:54.689 "unmap": true, 00:07:54.689 "flush": true, 00:07:54.689 "reset": true, 00:07:54.689 "nvme_admin": false, 00:07:54.689 "nvme_io": false, 00:07:54.689 "nvme_io_md": false, 00:07:54.689 "write_zeroes": true, 00:07:54.689 "zcopy": false, 00:07:54.689 "get_zone_info": false, 00:07:54.689 "zone_management": false, 00:07:54.689 "zone_append": false, 00:07:54.689 "compare": false, 00:07:54.689 "compare_and_write": false, 00:07:54.689 "abort": false, 00:07:54.689 "seek_hole": false, 00:07:54.689 "seek_data": false, 00:07:54.689 "copy": false, 00:07:54.689 "nvme_iov_md": false 00:07:54.689 }, 00:07:54.689 "memory_domains": [ 00:07:54.689 { 00:07:54.689 "dma_device_id": "system", 00:07:54.689 "dma_device_type": 1 00:07:54.689 }, 00:07:54.689 { 00:07:54.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.689 "dma_device_type": 2 00:07:54.689 }, 00:07:54.689 { 00:07:54.689 "dma_device_id": "system", 00:07:54.689 "dma_device_type": 1 00:07:54.689 }, 00:07:54.689 { 00:07:54.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.689 "dma_device_type": 2 00:07:54.689 }, 00:07:54.689 { 00:07:54.689 "dma_device_id": "system", 00:07:54.689 "dma_device_type": 1 00:07:54.689 }, 00:07:54.689 { 00:07:54.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.689 "dma_device_type": 2 00:07:54.689 } 00:07:54.689 ], 00:07:54.689 "driver_specific": { 00:07:54.689 "raid": { 00:07:54.689 "uuid": "226ec695-a411-4144-8450-1e907630365b", 00:07:54.689 
"strip_size_kb": 64, 00:07:54.689 "state": "online", 00:07:54.689 "raid_level": "concat", 00:07:54.689 "superblock": true, 00:07:54.689 "num_base_bdevs": 3, 00:07:54.689 "num_base_bdevs_discovered": 3, 00:07:54.689 "num_base_bdevs_operational": 3, 00:07:54.689 "base_bdevs_list": [ 00:07:54.689 { 00:07:54.689 "name": "NewBaseBdev", 00:07:54.689 "uuid": "f1c2d63a-cef7-470a-b153-8c0203c758d5", 00:07:54.689 "is_configured": true, 00:07:54.689 "data_offset": 2048, 00:07:54.689 "data_size": 63488 00:07:54.689 }, 00:07:54.689 { 00:07:54.689 "name": "BaseBdev2", 00:07:54.689 "uuid": "f66fbae6-eb2f-4880-8c0f-ab8ce62af2d0", 00:07:54.689 "is_configured": true, 00:07:54.689 "data_offset": 2048, 00:07:54.689 "data_size": 63488 00:07:54.689 }, 00:07:54.689 { 00:07:54.689 "name": "BaseBdev3", 00:07:54.689 "uuid": "ac788167-40db-42bf-b639-7015c90f0051", 00:07:54.689 "is_configured": true, 00:07:54.689 "data_offset": 2048, 00:07:54.689 "data_size": 63488 00:07:54.689 } 00:07:54.689 ] 00:07:54.689 } 00:07:54.689 } 00:07:54.689 }' 00:07:54.689 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:54.689 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:54.689 BaseBdev2 00:07:54.689 BaseBdev3' 00:07:54.689 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.689 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:54.689 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.689 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:54.690 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.690 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.690 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.690 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.690 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.690 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.690 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.690 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.690 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:54.690 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.690 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.690 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.950 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.951 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.951 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.951 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:54.951 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:07:54.951 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.951 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.951 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.951 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.951 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.951 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:54.951 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.951 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.951 [2024-10-30 09:42:33.348736] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:54.951 [2024-10-30 09:42:33.348839] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:54.951 [2024-10-30 09:42:33.348914] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:54.951 [2024-10-30 09:42:33.348972] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:54.951 [2024-10-30 09:42:33.348983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:07:54.951 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.951 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64784 00:07:54.951 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 64784 ']' 00:07:54.951 09:42:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@956 -- # kill -0 64784 00:07:54.951 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:07:54.951 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:54.951 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64784 00:07:54.951 killing process with pid 64784 00:07:54.951 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:54.951 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:54.951 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64784' 00:07:54.951 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 64784 00:07:54.951 [2024-10-30 09:42:33.378816] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:54.951 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 64784 00:07:54.951 [2024-10-30 09:42:33.565517] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:55.892 09:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:55.892 00:07:55.892 real 0m7.633s 00:07:55.892 user 0m12.154s 00:07:55.892 sys 0m1.221s 00:07:55.892 09:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:55.892 ************************************ 00:07:55.892 END TEST raid_state_function_test_sb 00:07:55.892 ************************************ 00:07:55.892 09:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.892 09:42:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:07:55.892 09:42:34 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:55.892 09:42:34 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:55.892 09:42:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:55.892 ************************************ 00:07:55.892 START TEST raid_superblock_test 00:07:55.892 ************************************ 00:07:55.892 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 3 00:07:55.892 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:55.892 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:07:55.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.892 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:55.892 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:55.892 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:55.892 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:55.892 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:55.892 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:55.892 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:55.892 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:55.892 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:55.892 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:55.892 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:55.892 09:42:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:55.892 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:55.892 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:55.892 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65382 00:07:55.892 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65382 00:07:55.892 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 65382 ']' 00:07:55.892 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.892 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:55.892 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:55.892 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.892 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:55.892 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.892 [2024-10-30 09:42:34.404266] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:07:55.892 [2024-10-30 09:42:34.404554] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65382 ] 00:07:56.152 [2024-10-30 09:42:34.561805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.152 [2024-10-30 09:42:34.662262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.434 [2024-10-30 09:42:34.798030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.434 [2024-10-30 09:42:34.798226] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:56.695 
09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.695 malloc1 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.695 [2024-10-30 09:42:35.282910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:56.695 [2024-10-30 09:42:35.282975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.695 [2024-10-30 09:42:35.282998] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:56.695 [2024-10-30 09:42:35.283009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.695 [2024-10-30 09:42:35.285223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.695 [2024-10-30 09:42:35.285346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:56.695 pt1 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.695 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.957 malloc2 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.957 [2024-10-30 09:42:35.318747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:56.957 [2024-10-30 09:42:35.318796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.957 [2024-10-30 09:42:35.318819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:56.957 [2024-10-30 09:42:35.318829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.957 [2024-10-30 09:42:35.320935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.957 [2024-10-30 09:42:35.321052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:56.957 
pt2 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.957 malloc3 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.957 [2024-10-30 09:42:35.374140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:56.957 [2024-10-30 09:42:35.374188] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.957 [2024-10-30 09:42:35.374209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:56.957 [2024-10-30 09:42:35.374219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.957 [2024-10-30 09:42:35.376281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.957 [2024-10-30 09:42:35.376313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:56.957 pt3 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.957 [2024-10-30 09:42:35.382191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:56.957 [2024-10-30 09:42:35.383993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:56.957 [2024-10-30 09:42:35.384050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:56.957 [2024-10-30 09:42:35.384218] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:56.957 [2024-10-30 09:42:35.384230] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:56.957 [2024-10-30 09:42:35.384481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:07:56.957 [2024-10-30 09:42:35.384630] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:56.957 [2024-10-30 09:42:35.384639] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:56.957 [2024-10-30 09:42:35.384784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.957 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.958 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.958 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.958 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.958 09:42:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.958 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.958 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.958 "name": "raid_bdev1", 00:07:56.958 "uuid": "371584ae-42f0-46d6-9727-2a4d7ef180d0", 00:07:56.958 "strip_size_kb": 64, 00:07:56.958 "state": "online", 00:07:56.958 "raid_level": "concat", 00:07:56.958 "superblock": true, 00:07:56.958 "num_base_bdevs": 3, 00:07:56.958 "num_base_bdevs_discovered": 3, 00:07:56.958 "num_base_bdevs_operational": 3, 00:07:56.958 "base_bdevs_list": [ 00:07:56.958 { 00:07:56.958 "name": "pt1", 00:07:56.958 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:56.958 "is_configured": true, 00:07:56.958 "data_offset": 2048, 00:07:56.958 "data_size": 63488 00:07:56.958 }, 00:07:56.958 { 00:07:56.958 "name": "pt2", 00:07:56.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.958 "is_configured": true, 00:07:56.958 "data_offset": 2048, 00:07:56.958 "data_size": 63488 00:07:56.958 }, 00:07:56.958 { 00:07:56.958 "name": "pt3", 00:07:56.958 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:56.958 "is_configured": true, 00:07:56.958 "data_offset": 2048, 00:07:56.958 "data_size": 63488 00:07:56.958 } 00:07:56.958 ] 00:07:56.958 }' 00:07:56.958 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.958 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:57.220 [2024-10-30 09:42:35.706547] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:57.220 "name": "raid_bdev1", 00:07:57.220 "aliases": [ 00:07:57.220 "371584ae-42f0-46d6-9727-2a4d7ef180d0" 00:07:57.220 ], 00:07:57.220 "product_name": "Raid Volume", 00:07:57.220 "block_size": 512, 00:07:57.220 "num_blocks": 190464, 00:07:57.220 "uuid": "371584ae-42f0-46d6-9727-2a4d7ef180d0", 00:07:57.220 "assigned_rate_limits": { 00:07:57.220 "rw_ios_per_sec": 0, 00:07:57.220 "rw_mbytes_per_sec": 0, 00:07:57.220 "r_mbytes_per_sec": 0, 00:07:57.220 "w_mbytes_per_sec": 0 00:07:57.220 }, 00:07:57.220 "claimed": false, 00:07:57.220 "zoned": false, 00:07:57.220 "supported_io_types": { 00:07:57.220 "read": true, 00:07:57.220 "write": true, 00:07:57.220 "unmap": true, 00:07:57.220 "flush": true, 00:07:57.220 "reset": true, 00:07:57.220 "nvme_admin": false, 00:07:57.220 "nvme_io": false, 00:07:57.220 "nvme_io_md": false, 00:07:57.220 "write_zeroes": true, 00:07:57.220 "zcopy": false, 00:07:57.220 "get_zone_info": false, 00:07:57.220 "zone_management": false, 00:07:57.220 "zone_append": false, 00:07:57.220 "compare": 
false, 00:07:57.220 "compare_and_write": false, 00:07:57.220 "abort": false, 00:07:57.220 "seek_hole": false, 00:07:57.220 "seek_data": false, 00:07:57.220 "copy": false, 00:07:57.220 "nvme_iov_md": false 00:07:57.220 }, 00:07:57.220 "memory_domains": [ 00:07:57.220 { 00:07:57.220 "dma_device_id": "system", 00:07:57.220 "dma_device_type": 1 00:07:57.220 }, 00:07:57.220 { 00:07:57.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.220 "dma_device_type": 2 00:07:57.220 }, 00:07:57.220 { 00:07:57.220 "dma_device_id": "system", 00:07:57.220 "dma_device_type": 1 00:07:57.220 }, 00:07:57.220 { 00:07:57.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.220 "dma_device_type": 2 00:07:57.220 }, 00:07:57.220 { 00:07:57.220 "dma_device_id": "system", 00:07:57.220 "dma_device_type": 1 00:07:57.220 }, 00:07:57.220 { 00:07:57.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.220 "dma_device_type": 2 00:07:57.220 } 00:07:57.220 ], 00:07:57.220 "driver_specific": { 00:07:57.220 "raid": { 00:07:57.220 "uuid": "371584ae-42f0-46d6-9727-2a4d7ef180d0", 00:07:57.220 "strip_size_kb": 64, 00:07:57.220 "state": "online", 00:07:57.220 "raid_level": "concat", 00:07:57.220 "superblock": true, 00:07:57.220 "num_base_bdevs": 3, 00:07:57.220 "num_base_bdevs_discovered": 3, 00:07:57.220 "num_base_bdevs_operational": 3, 00:07:57.220 "base_bdevs_list": [ 00:07:57.220 { 00:07:57.220 "name": "pt1", 00:07:57.220 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:57.220 "is_configured": true, 00:07:57.220 "data_offset": 2048, 00:07:57.220 "data_size": 63488 00:07:57.220 }, 00:07:57.220 { 00:07:57.220 "name": "pt2", 00:07:57.220 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.220 "is_configured": true, 00:07:57.220 "data_offset": 2048, 00:07:57.220 "data_size": 63488 00:07:57.220 }, 00:07:57.220 { 00:07:57.220 "name": "pt3", 00:07:57.220 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:57.220 "is_configured": true, 00:07:57.220 "data_offset": 2048, 00:07:57.220 
"data_size": 63488 00:07:57.220 } 00:07:57.220 ] 00:07:57.220 } 00:07:57.220 } 00:07:57.220 }' 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:57.220 pt2 00:07:57.220 pt3' 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:57.220 09:42:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.220 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.482 [2024-10-30 09:42:35.902568] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:57.482 09:42:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=371584ae-42f0-46d6-9727-2a4d7ef180d0 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 371584ae-42f0-46d6-9727-2a4d7ef180d0 ']' 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.482 [2024-10-30 09:42:35.930263] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.482 [2024-10-30 09:42:35.930286] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.482 [2024-10-30 09:42:35.930352] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.482 [2024-10-30 09:42:35.930416] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.482 [2024-10-30 09:42:35.930425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.482 09:42:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.482 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.482 [2024-10-30 09:42:36.030329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:57.482 [2024-10-30 09:42:36.032251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc2 is claimed 00:07:57.482 [2024-10-30 09:42:36.032319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:07:57.482 [2024-10-30 09:42:36.032428] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:57.482 [2024-10-30 09:42:36.032539] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:57.482 [2024-10-30 09:42:36.032913] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:07:57.482 [2024-10-30 09:42:36.033150] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.482 [2024-10-30 09:42:36.033208] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:57.482 request: 00:07:57.482 { 00:07:57.482 "name": "raid_bdev1", 00:07:57.482 "raid_level": "concat", 00:07:57.482 "base_bdevs": [ 00:07:57.482 "malloc1", 00:07:57.482 "malloc2", 00:07:57.482 "malloc3" 00:07:57.482 ], 00:07:57.482 "strip_size_kb": 64, 00:07:57.482 "superblock": false, 00:07:57.482 "method": "bdev_raid_create", 00:07:57.482 "req_id": 1 00:07:57.482 } 00:07:57.482 Got JSON-RPC error response 00:07:57.482 response: 00:07:57.482 { 00:07:57.482 "code": -17, 00:07:57.482 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:57.482 } 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # 
(( !es == 0 )) 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.482 [2024-10-30 09:42:36.070298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:57.482 [2024-10-30 09:42:36.070343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.482 [2024-10-30 09:42:36.070361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:57.482 [2024-10-30 09:42:36.070369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.482 [2024-10-30 09:42:36.072521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.482 [2024-10-30 09:42:36.072555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:57.482 [2024-10-30 09:42:36.072625] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:57.482 [2024-10-30 09:42:36.072671] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:57.482 pt1 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.482 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:57.483 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.483 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:57.483 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.483 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.483 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.483 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.483 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.483 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.483 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.483 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.483 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.743 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.743 "name": "raid_bdev1", 
00:07:57.743 "uuid": "371584ae-42f0-46d6-9727-2a4d7ef180d0", 00:07:57.743 "strip_size_kb": 64, 00:07:57.743 "state": "configuring", 00:07:57.743 "raid_level": "concat", 00:07:57.743 "superblock": true, 00:07:57.743 "num_base_bdevs": 3, 00:07:57.743 "num_base_bdevs_discovered": 1, 00:07:57.743 "num_base_bdevs_operational": 3, 00:07:57.743 "base_bdevs_list": [ 00:07:57.743 { 00:07:57.743 "name": "pt1", 00:07:57.743 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:57.743 "is_configured": true, 00:07:57.743 "data_offset": 2048, 00:07:57.743 "data_size": 63488 00:07:57.743 }, 00:07:57.743 { 00:07:57.743 "name": null, 00:07:57.743 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.743 "is_configured": false, 00:07:57.743 "data_offset": 2048, 00:07:57.743 "data_size": 63488 00:07:57.743 }, 00:07:57.743 { 00:07:57.743 "name": null, 00:07:57.743 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:57.743 "is_configured": false, 00:07:57.743 "data_offset": 2048, 00:07:57.743 "data_size": 63488 00:07:57.743 } 00:07:57.743 ] 00:07:57.743 }' 00:07:57.743 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.743 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.004 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:07:58.004 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:58.004 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.004 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.004 [2024-10-30 09:42:36.394396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:58.004 [2024-10-30 09:42:36.394447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.004 [2024-10-30 09:42:36.394467] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:07:58.004 [2024-10-30 09:42:36.394475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.004 [2024-10-30 09:42:36.394874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.004 [2024-10-30 09:42:36.394894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:58.004 [2024-10-30 09:42:36.394966] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:58.004 [2024-10-30 09:42:36.394985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:58.004 pt2 00:07:58.004 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.004 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:07:58.004 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.004 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.004 [2024-10-30 09:42:36.402402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:07:58.004 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.004 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:07:58.004 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.004 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.004 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:58.004 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.004 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:07:58.004 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.004 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.004 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.004 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.004 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.004 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.004 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.004 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.004 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.004 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.004 "name": "raid_bdev1", 00:07:58.004 "uuid": "371584ae-42f0-46d6-9727-2a4d7ef180d0", 00:07:58.004 "strip_size_kb": 64, 00:07:58.004 "state": "configuring", 00:07:58.004 "raid_level": "concat", 00:07:58.004 "superblock": true, 00:07:58.004 "num_base_bdevs": 3, 00:07:58.004 "num_base_bdevs_discovered": 1, 00:07:58.004 "num_base_bdevs_operational": 3, 00:07:58.004 "base_bdevs_list": [ 00:07:58.004 { 00:07:58.004 "name": "pt1", 00:07:58.004 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:58.004 "is_configured": true, 00:07:58.004 "data_offset": 2048, 00:07:58.004 "data_size": 63488 00:07:58.004 }, 00:07:58.004 { 00:07:58.004 "name": null, 00:07:58.004 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.004 "is_configured": false, 00:07:58.004 "data_offset": 0, 00:07:58.004 "data_size": 63488 00:07:58.004 }, 00:07:58.004 { 00:07:58.004 "name": null, 00:07:58.004 
"uuid": "00000000-0000-0000-0000-000000000003", 00:07:58.005 "is_configured": false, 00:07:58.005 "data_offset": 2048, 00:07:58.005 "data_size": 63488 00:07:58.005 } 00:07:58.005 ] 00:07:58.005 }' 00:07:58.005 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.005 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.266 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:58.266 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:58.266 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:58.266 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.266 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.266 [2024-10-30 09:42:36.730456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:58.266 [2024-10-30 09:42:36.730512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.266 [2024-10-30 09:42:36.730529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:07:58.266 [2024-10-30 09:42:36.730539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.266 [2024-10-30 09:42:36.730939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.266 [2024-10-30 09:42:36.730954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:58.266 [2024-10-30 09:42:36.731019] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:58.266 [2024-10-30 09:42:36.731040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:58.266 pt2 00:07:58.266 09:42:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.266 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:58.266 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:58.266 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:58.266 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.266 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.266 [2024-10-30 09:42:36.738454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:58.266 [2024-10-30 09:42:36.738591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.266 [2024-10-30 09:42:36.738612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:07:58.266 [2024-10-30 09:42:36.738622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.266 [2024-10-30 09:42:36.738986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.266 [2024-10-30 09:42:36.739011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:58.266 [2024-10-30 09:42:36.739082] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:07:58.266 [2024-10-30 09:42:36.739103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:58.266 [2024-10-30 09:42:36.739215] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:58.266 [2024-10-30 09:42:36.739226] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:58.266 [2024-10-30 09:42:36.739462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:07:58.266 [2024-10-30 09:42:36.739591] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:58.266 [2024-10-30 09:42:36.739599] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:58.266 [2024-10-30 09:42:36.739719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.266 pt3 00:07:58.266 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.267 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:58.267 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:58.267 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:07:58.267 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.267 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.267 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:58.267 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.267 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:58.267 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.267 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.267 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.267 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.267 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.267 09:42:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.267 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.267 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.267 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.267 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.267 "name": "raid_bdev1", 00:07:58.267 "uuid": "371584ae-42f0-46d6-9727-2a4d7ef180d0", 00:07:58.267 "strip_size_kb": 64, 00:07:58.267 "state": "online", 00:07:58.267 "raid_level": "concat", 00:07:58.267 "superblock": true, 00:07:58.267 "num_base_bdevs": 3, 00:07:58.267 "num_base_bdevs_discovered": 3, 00:07:58.267 "num_base_bdevs_operational": 3, 00:07:58.267 "base_bdevs_list": [ 00:07:58.267 { 00:07:58.267 "name": "pt1", 00:07:58.267 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:58.267 "is_configured": true, 00:07:58.267 "data_offset": 2048, 00:07:58.267 "data_size": 63488 00:07:58.267 }, 00:07:58.267 { 00:07:58.267 "name": "pt2", 00:07:58.267 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.267 "is_configured": true, 00:07:58.267 "data_offset": 2048, 00:07:58.267 "data_size": 63488 00:07:58.267 }, 00:07:58.267 { 00:07:58.267 "name": "pt3", 00:07:58.267 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:58.267 "is_configured": true, 00:07:58.267 "data_offset": 2048, 00:07:58.267 "data_size": 63488 00:07:58.267 } 00:07:58.267 ] 00:07:58.267 }' 00:07:58.267 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.267 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.527 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:58.527 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:07:58.527 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:58.527 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:58.527 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:58.527 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:58.527 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:58.527 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:58.527 09:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.527 09:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.527 [2024-10-30 09:42:37.058879] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.527 09:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.527 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:58.527 "name": "raid_bdev1", 00:07:58.527 "aliases": [ 00:07:58.528 "371584ae-42f0-46d6-9727-2a4d7ef180d0" 00:07:58.528 ], 00:07:58.528 "product_name": "Raid Volume", 00:07:58.528 "block_size": 512, 00:07:58.528 "num_blocks": 190464, 00:07:58.528 "uuid": "371584ae-42f0-46d6-9727-2a4d7ef180d0", 00:07:58.528 "assigned_rate_limits": { 00:07:58.528 "rw_ios_per_sec": 0, 00:07:58.528 "rw_mbytes_per_sec": 0, 00:07:58.528 "r_mbytes_per_sec": 0, 00:07:58.528 "w_mbytes_per_sec": 0 00:07:58.528 }, 00:07:58.528 "claimed": false, 00:07:58.528 "zoned": false, 00:07:58.528 "supported_io_types": { 00:07:58.528 "read": true, 00:07:58.528 "write": true, 00:07:58.528 "unmap": true, 00:07:58.528 "flush": true, 00:07:58.528 "reset": true, 00:07:58.528 "nvme_admin": false, 00:07:58.528 "nvme_io": false, 00:07:58.528 
"nvme_io_md": false, 00:07:58.528 "write_zeroes": true, 00:07:58.528 "zcopy": false, 00:07:58.528 "get_zone_info": false, 00:07:58.528 "zone_management": false, 00:07:58.528 "zone_append": false, 00:07:58.528 "compare": false, 00:07:58.528 "compare_and_write": false, 00:07:58.528 "abort": false, 00:07:58.528 "seek_hole": false, 00:07:58.528 "seek_data": false, 00:07:58.528 "copy": false, 00:07:58.528 "nvme_iov_md": false 00:07:58.528 }, 00:07:58.528 "memory_domains": [ 00:07:58.528 { 00:07:58.528 "dma_device_id": "system", 00:07:58.528 "dma_device_type": 1 00:07:58.528 }, 00:07:58.528 { 00:07:58.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.528 "dma_device_type": 2 00:07:58.528 }, 00:07:58.528 { 00:07:58.528 "dma_device_id": "system", 00:07:58.528 "dma_device_type": 1 00:07:58.528 }, 00:07:58.528 { 00:07:58.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.528 "dma_device_type": 2 00:07:58.528 }, 00:07:58.528 { 00:07:58.528 "dma_device_id": "system", 00:07:58.528 "dma_device_type": 1 00:07:58.528 }, 00:07:58.528 { 00:07:58.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.528 "dma_device_type": 2 00:07:58.528 } 00:07:58.528 ], 00:07:58.528 "driver_specific": { 00:07:58.528 "raid": { 00:07:58.528 "uuid": "371584ae-42f0-46d6-9727-2a4d7ef180d0", 00:07:58.528 "strip_size_kb": 64, 00:07:58.528 "state": "online", 00:07:58.528 "raid_level": "concat", 00:07:58.528 "superblock": true, 00:07:58.528 "num_base_bdevs": 3, 00:07:58.528 "num_base_bdevs_discovered": 3, 00:07:58.528 "num_base_bdevs_operational": 3, 00:07:58.528 "base_bdevs_list": [ 00:07:58.528 { 00:07:58.528 "name": "pt1", 00:07:58.528 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:58.528 "is_configured": true, 00:07:58.528 "data_offset": 2048, 00:07:58.528 "data_size": 63488 00:07:58.528 }, 00:07:58.528 { 00:07:58.528 "name": "pt2", 00:07:58.528 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.528 "is_configured": true, 00:07:58.528 "data_offset": 2048, 00:07:58.528 "data_size": 
63488 00:07:58.528 }, 00:07:58.528 { 00:07:58.528 "name": "pt3", 00:07:58.528 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:58.528 "is_configured": true, 00:07:58.528 "data_offset": 2048, 00:07:58.528 "data_size": 63488 00:07:58.528 } 00:07:58.528 ] 00:07:58.528 } 00:07:58.528 } 00:07:58.528 }' 00:07:58.528 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:58.528 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:58.528 pt2 00:07:58.528 pt3' 00:07:58.528 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.528 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:58.528 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:07:58.789 [2024-10-30 09:42:37.246876] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 371584ae-42f0-46d6-9727-2a4d7ef180d0 '!=' 371584ae-42f0-46d6-9727-2a4d7ef180d0 ']' 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65382 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 65382 ']' 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 65382 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65382 00:07:58.789 killing process with pid 65382 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65382' 00:07:58.789 09:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 65382 00:07:58.789 [2024-10-30 09:42:37.300497] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:58.789 09:42:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 65382 00:07:58.789 [2024-10-30 09:42:37.300578] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.789 [2024-10-30 09:42:37.300635] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:58.789 [2024-10-30 09:42:37.300646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:59.049 [2024-10-30 09:42:37.485218] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:59.619 09:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:59.619 00:07:59.619 real 0m3.852s 00:07:59.619 user 0m5.536s 00:07:59.619 sys 0m0.594s 00:07:59.619 ************************************ 00:07:59.619 END TEST raid_superblock_test 00:07:59.619 ************************************ 00:07:59.619 09:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:59.619 09:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.881 09:42:38 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:07:59.881 09:42:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:59.881 09:42:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:59.881 09:42:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:59.881 ************************************ 00:07:59.881 START TEST raid_read_error_test 00:07:59.881 ************************************ 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 read 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 
00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:59.881 09:42:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hhkzfkhYef 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65613 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65613 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 65613 ']' 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.881 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:59.881 [2024-10-30 09:42:38.333502] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:07:59.882 [2024-10-30 09:42:38.333620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65613 ] 00:07:59.882 [2024-10-30 09:42:38.493490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.142 [2024-10-30 09:42:38.595028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.142 [2024-10-30 09:42:38.730354] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.142 [2024-10-30 09:42:38.730400] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.715 BaseBdev1_malloc 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.715 true 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.715 [2024-10-30 09:42:39.222875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:00.715 [2024-10-30 09:42:39.222928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.715 [2024-10-30 09:42:39.222948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:00.715 [2024-10-30 09:42:39.222960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.715 [2024-10-30 09:42:39.225107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.715 [2024-10-30 09:42:39.225142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:00.715 BaseBdev1 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.715 BaseBdev2_malloc 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.715 true 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.715 [2024-10-30 09:42:39.266676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:00.715 [2024-10-30 09:42:39.266725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.715 [2024-10-30 09:42:39.266742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:00.715 [2024-10-30 09:42:39.266754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.715 [2024-10-30 09:42:39.268874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.715 [2024-10-30 09:42:39.269013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:00.715 BaseBdev2 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.715 BaseBdev3_malloc 00:08:00.715 09:42:39 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.715 true 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:00.715 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.716 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.716 [2024-10-30 09:42:39.332406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:00.716 [2024-10-30 09:42:39.332461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.716 [2024-10-30 09:42:39.332480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:00.716 [2024-10-30 09:42:39.332492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.976 [2024-10-30 09:42:39.334656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.976 [2024-10-30 09:42:39.334692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:00.976 BaseBdev3 00:08:00.976 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.976 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:00.976 09:42:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.976 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.976 [2024-10-30 09:42:39.340473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:00.976 [2024-10-30 09:42:39.342425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:00.976 [2024-10-30 09:42:39.342502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:00.976 [2024-10-30 09:42:39.342695] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:00.976 [2024-10-30 09:42:39.342705] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:00.976 [2024-10-30 09:42:39.342952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:00.976 [2024-10-30 09:42:39.343103] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:00.976 [2024-10-30 09:42:39.343116] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:00.976 [2024-10-30 09:42:39.343253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.976 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.976 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:00.976 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:00.976 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.976 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:00.976 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.976 09:42:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:00.976 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.976 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.976 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.976 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.976 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.976 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.976 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.976 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.976 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.976 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.976 "name": "raid_bdev1", 00:08:00.976 "uuid": "aff0de11-883c-488f-9617-a64d6492636a", 00:08:00.976 "strip_size_kb": 64, 00:08:00.976 "state": "online", 00:08:00.976 "raid_level": "concat", 00:08:00.976 "superblock": true, 00:08:00.976 "num_base_bdevs": 3, 00:08:00.976 "num_base_bdevs_discovered": 3, 00:08:00.976 "num_base_bdevs_operational": 3, 00:08:00.976 "base_bdevs_list": [ 00:08:00.976 { 00:08:00.976 "name": "BaseBdev1", 00:08:00.976 "uuid": "2616ebd9-5af3-54ef-b210-2c615c9eab25", 00:08:00.976 "is_configured": true, 00:08:00.976 "data_offset": 2048, 00:08:00.976 "data_size": 63488 00:08:00.976 }, 00:08:00.976 { 00:08:00.976 "name": "BaseBdev2", 00:08:00.976 "uuid": "6e7dd7a7-acad-5f40-93ca-280fcf263b8d", 00:08:00.976 "is_configured": true, 00:08:00.976 "data_offset": 2048, 00:08:00.976 "data_size": 63488 
00:08:00.976 }, 00:08:00.976 { 00:08:00.976 "name": "BaseBdev3", 00:08:00.976 "uuid": "fc611a39-d8aa-5b54-8e05-07663c055670", 00:08:00.976 "is_configured": true, 00:08:00.976 "data_offset": 2048, 00:08:00.976 "data_size": 63488 00:08:00.976 } 00:08:00.976 ] 00:08:00.976 }' 00:08:00.976 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.976 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.237 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:01.237 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:01.237 [2024-10-30 09:42:39.725539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:02.179 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:02.179 09:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.179 09:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.179 09:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.179 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:02.179 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:02.179 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:02.179 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:02.179 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.179 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:02.179 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:02.179 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.179 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:02.179 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.179 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.179 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.179 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.179 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.179 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.179 09:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.179 09:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.179 09:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.179 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.179 "name": "raid_bdev1", 00:08:02.179 "uuid": "aff0de11-883c-488f-9617-a64d6492636a", 00:08:02.179 "strip_size_kb": 64, 00:08:02.179 "state": "online", 00:08:02.179 "raid_level": "concat", 00:08:02.179 "superblock": true, 00:08:02.179 "num_base_bdevs": 3, 00:08:02.179 "num_base_bdevs_discovered": 3, 00:08:02.179 "num_base_bdevs_operational": 3, 00:08:02.179 "base_bdevs_list": [ 00:08:02.179 { 00:08:02.179 "name": "BaseBdev1", 00:08:02.179 "uuid": "2616ebd9-5af3-54ef-b210-2c615c9eab25", 00:08:02.179 "is_configured": true, 00:08:02.179 "data_offset": 2048, 00:08:02.179 "data_size": 63488 
00:08:02.179 }, 00:08:02.179 { 00:08:02.179 "name": "BaseBdev2", 00:08:02.179 "uuid": "6e7dd7a7-acad-5f40-93ca-280fcf263b8d", 00:08:02.179 "is_configured": true, 00:08:02.179 "data_offset": 2048, 00:08:02.179 "data_size": 63488 00:08:02.179 }, 00:08:02.179 { 00:08:02.179 "name": "BaseBdev3", 00:08:02.179 "uuid": "fc611a39-d8aa-5b54-8e05-07663c055670", 00:08:02.179 "is_configured": true, 00:08:02.179 "data_offset": 2048, 00:08:02.179 "data_size": 63488 00:08:02.179 } 00:08:02.179 ] 00:08:02.179 }' 00:08:02.179 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.179 09:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.440 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:02.440 09:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.440 09:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.440 [2024-10-30 09:42:40.971462] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:02.440 [2024-10-30 09:42:40.971596] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:02.440 [2024-10-30 09:42:40.974631] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.440 [2024-10-30 09:42:40.974766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.440 [2024-10-30 09:42:40.974812] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.440 [2024-10-30 09:42:40.974824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:02.440 { 00:08:02.440 "results": [ 00:08:02.440 { 00:08:02.440 "job": "raid_bdev1", 00:08:02.440 "core_mask": "0x1", 00:08:02.440 "workload": "randrw", 00:08:02.440 "percentage": 50, 
00:08:02.440 "status": "finished", 00:08:02.440 "queue_depth": 1, 00:08:02.440 "io_size": 131072, 00:08:02.440 "runtime": 1.24418, 00:08:02.440 "iops": 15034.80203829028, 00:08:02.440 "mibps": 1879.350254786285, 00:08:02.440 "io_failed": 1, 00:08:02.440 "io_timeout": 0, 00:08:02.440 "avg_latency_us": 91.00359931082976, 00:08:02.440 "min_latency_us": 33.08307692307692, 00:08:02.440 "max_latency_us": 1701.4153846153847 00:08:02.440 } 00:08:02.440 ], 00:08:02.440 "core_count": 1 00:08:02.440 } 00:08:02.440 09:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.440 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65613 00:08:02.440 09:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 65613 ']' 00:08:02.440 09:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 65613 00:08:02.440 09:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:08:02.440 09:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:02.440 09:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65613 00:08:02.440 killing process with pid 65613 00:08:02.440 09:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:02.440 09:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:02.440 09:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65613' 00:08:02.440 09:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 65613 00:08:02.440 [2024-10-30 09:42:41.004113] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:02.440 09:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 65613 00:08:02.699 [2024-10-30 
09:42:41.147012] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:03.268 09:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hhkzfkhYef 00:08:03.268 09:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:03.268 09:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:03.529 09:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80 00:08:03.529 09:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:03.529 09:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:03.529 09:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:03.529 09:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.80 != \0\.\0\0 ]] 00:08:03.529 00:08:03.529 real 0m3.630s 00:08:03.529 user 0m4.314s 00:08:03.529 sys 0m0.365s 00:08:03.529 ************************************ 00:08:03.529 END TEST raid_read_error_test 00:08:03.530 ************************************ 00:08:03.530 09:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:03.530 09:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.530 09:42:41 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:08:03.530 09:42:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:03.530 09:42:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:03.530 09:42:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:03.530 ************************************ 00:08:03.530 START TEST raid_write_error_test 00:08:03.530 ************************************ 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 write 00:08:03.530 09:42:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:03.530 09:42:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:03.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.C8YMJR3uyF 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65753 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65753 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 65753 ']' 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.530 09:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:03.530 [2024-10-30 09:42:42.032590] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:08:03.530 [2024-10-30 09:42:42.032718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65753 ] 00:08:03.791 [2024-10-30 09:42:42.191742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.791 [2024-10-30 09:42:42.293211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.051 [2024-10-30 09:42:42.428822] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.051 [2024-10-30 09:42:42.428850] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.309 09:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:04.309 09:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:04.309 09:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:04.309 09:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:04.309 09:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.309 09:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.309 BaseBdev1_malloc 00:08:04.309 09:42:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.309 09:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:04.309 09:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.309 09:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.309 true 00:08:04.309 09:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.309 09:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:04.309 09:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.309 09:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.309 [2024-10-30 09:42:42.914758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:04.309 [2024-10-30 09:42:42.914811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.309 [2024-10-30 09:42:42.914831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:04.309 [2024-10-30 09:42:42.914844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.309 [2024-10-30 09:42:42.916998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.310 [2024-10-30 09:42:42.917035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:04.310 BaseBdev1 00:08:04.310 09:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.310 09:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:04.310 09:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:08:04.310 09:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.310 09:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.569 BaseBdev2_malloc 00:08:04.569 09:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.569 09:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:04.569 09:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.569 09:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.569 true 00:08:04.569 09:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.569 09:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:04.569 09:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.569 09:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.569 [2024-10-30 09:42:42.958673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:04.569 [2024-10-30 09:42:42.958824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.569 [2024-10-30 09:42:42.958845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:04.569 [2024-10-30 09:42:42.958856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.569 [2024-10-30 09:42:42.960950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.569 [2024-10-30 09:42:42.960984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:04.569 BaseBdev2 00:08:04.569 09:42:42 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.569 09:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:04.569 09:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:04.569 09:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.569 09:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.569 BaseBdev3_malloc 00:08:04.569 09:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.569 09:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:04.569 09:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.570 09:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.570 true 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.570 [2024-10-30 09:42:43.010516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:04.570 [2024-10-30 09:42:43.010672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.570 [2024-10-30 09:42:43.010695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:04.570 [2024-10-30 09:42:43.010706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.570 [2024-10-30 09:42:43.012849] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.570 [2024-10-30 09:42:43.012884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:04.570 BaseBdev3 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.570 [2024-10-30 09:42:43.018594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:04.570 [2024-10-30 09:42:43.020426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:04.570 [2024-10-30 09:42:43.020504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:04.570 [2024-10-30 09:42:43.020701] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:04.570 [2024-10-30 09:42:43.020720] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:04.570 [2024-10-30 09:42:43.020969] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:04.570 [2024-10-30 09:42:43.021125] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:04.570 [2024-10-30 09:42:43.021138] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:04.570 [2024-10-30 09:42:43.021278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.570 
09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.570 "name": "raid_bdev1", 00:08:04.570 "uuid": "d2f8bc32-30b9-4013-9c58-303ed7a1c5b8", 00:08:04.570 "strip_size_kb": 64, 00:08:04.570 "state": "online", 00:08:04.570 "raid_level": "concat", 00:08:04.570 "superblock": true, 
00:08:04.570 "num_base_bdevs": 3, 00:08:04.570 "num_base_bdevs_discovered": 3, 00:08:04.570 "num_base_bdevs_operational": 3, 00:08:04.570 "base_bdevs_list": [ 00:08:04.570 { 00:08:04.570 "name": "BaseBdev1", 00:08:04.570 "uuid": "353f4856-32e6-57b5-b2ea-27e3b22830ba", 00:08:04.570 "is_configured": true, 00:08:04.570 "data_offset": 2048, 00:08:04.570 "data_size": 63488 00:08:04.570 }, 00:08:04.570 { 00:08:04.570 "name": "BaseBdev2", 00:08:04.570 "uuid": "e14fc875-205b-5064-ae3e-cebb32d79f91", 00:08:04.570 "is_configured": true, 00:08:04.570 "data_offset": 2048, 00:08:04.570 "data_size": 63488 00:08:04.570 }, 00:08:04.570 { 00:08:04.570 "name": "BaseBdev3", 00:08:04.570 "uuid": "9b444152-1746-5a5e-a42c-fc20838ef1c5", 00:08:04.570 "is_configured": true, 00:08:04.570 "data_offset": 2048, 00:08:04.570 "data_size": 63488 00:08:04.570 } 00:08:04.570 ] 00:08:04.570 }' 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.570 09:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.830 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:04.830 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:04.830 [2024-10-30 09:42:43.431614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:05.858 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:05.858 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.858 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.858 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.858 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local 
expected_num_base_bdevs 00:08:05.858 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:05.858 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:05.858 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:05.858 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:05.858 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:05.858 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:05.858 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.858 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.858 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.858 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.858 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.858 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.858 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:05.858 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.858 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.858 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.858 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.858 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:08:05.858 "name": "raid_bdev1", 00:08:05.858 "uuid": "d2f8bc32-30b9-4013-9c58-303ed7a1c5b8", 00:08:05.858 "strip_size_kb": 64, 00:08:05.858 "state": "online", 00:08:05.858 "raid_level": "concat", 00:08:05.858 "superblock": true, 00:08:05.858 "num_base_bdevs": 3, 00:08:05.858 "num_base_bdevs_discovered": 3, 00:08:05.858 "num_base_bdevs_operational": 3, 00:08:05.858 "base_bdevs_list": [ 00:08:05.858 { 00:08:05.858 "name": "BaseBdev1", 00:08:05.858 "uuid": "353f4856-32e6-57b5-b2ea-27e3b22830ba", 00:08:05.858 "is_configured": true, 00:08:05.858 "data_offset": 2048, 00:08:05.858 "data_size": 63488 00:08:05.858 }, 00:08:05.858 { 00:08:05.858 "name": "BaseBdev2", 00:08:05.858 "uuid": "e14fc875-205b-5064-ae3e-cebb32d79f91", 00:08:05.858 "is_configured": true, 00:08:05.858 "data_offset": 2048, 00:08:05.858 "data_size": 63488 00:08:05.858 }, 00:08:05.858 { 00:08:05.858 "name": "BaseBdev3", 00:08:05.858 "uuid": "9b444152-1746-5a5e-a42c-fc20838ef1c5", 00:08:05.858 "is_configured": true, 00:08:05.858 "data_offset": 2048, 00:08:05.858 "data_size": 63488 00:08:05.858 } 00:08:05.858 ] 00:08:05.858 }' 00:08:05.858 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.858 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.119 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:06.119 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.119 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.119 [2024-10-30 09:42:44.669257] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:06.119 [2024-10-30 09:42:44.669284] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:06.119 { 00:08:06.119 "results": [ 00:08:06.119 { 00:08:06.119 "job": "raid_bdev1", 
00:08:06.119 "core_mask": "0x1", 00:08:06.119 "workload": "randrw", 00:08:06.119 "percentage": 50, 00:08:06.119 "status": "finished", 00:08:06.119 "queue_depth": 1, 00:08:06.119 "io_size": 131072, 00:08:06.119 "runtime": 1.235772, 00:08:06.119 "iops": 14963.92538429419, 00:08:06.119 "mibps": 1870.4906730367738, 00:08:06.119 "io_failed": 1, 00:08:06.119 "io_timeout": 0, 00:08:06.119 "avg_latency_us": 91.43460685748039, 00:08:06.119 "min_latency_us": 33.28, 00:08:06.119 "max_latency_us": 1676.2092307692308 00:08:06.119 } 00:08:06.119 ], 00:08:06.119 "core_count": 1 00:08:06.119 } 00:08:06.119 [2024-10-30 09:42:44.672289] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:06.119 [2024-10-30 09:42:44.672335] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.119 [2024-10-30 09:42:44.672373] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:06.119 [2024-10-30 09:42:44.672384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:06.119 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.119 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65753 00:08:06.119 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 65753 ']' 00:08:06.119 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 65753 00:08:06.119 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:08:06.119 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:06.119 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65753 00:08:06.119 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:08:06.119 killing process with pid 65753 00:08:06.119 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:06.119 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65753' 00:08:06.119 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 65753 00:08:06.119 [2024-10-30 09:42:44.701811] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:06.119 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 65753 00:08:06.380 [2024-10-30 09:42:44.842367] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:07.323 09:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:07.323 09:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.C8YMJR3uyF 00:08:07.323 09:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:07.323 09:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:08:07.323 09:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:07.323 09:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:07.323 09:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:07.323 09:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:08:07.323 00:08:07.323 real 0m3.640s 00:08:07.323 user 0m4.306s 00:08:07.323 sys 0m0.404s 00:08:07.323 ************************************ 00:08:07.323 END TEST raid_write_error_test 00:08:07.323 ************************************ 00:08:07.323 09:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:07.323 09:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.323 09:42:45 
bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:07.323 09:42:45 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:08:07.323 09:42:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:07.323 09:42:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:07.323 09:42:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:07.323 ************************************ 00:08:07.323 START TEST raid_state_function_test 00:08:07.323 ************************************ 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 false 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:07.323 Process raid pid: 65887 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65887 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65887' 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65887 00:08:07.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 65887 ']' 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.323 09:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:07.324 09:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.324 09:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:07.324 09:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:07.324 09:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.324 [2024-10-30 09:42:45.737295] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:08:07.324 [2024-10-30 09:42:45.737413] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.324 [2024-10-30 09:42:45.897262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.584 [2024-10-30 09:42:46.041404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.584 [2024-10-30 09:42:46.179110] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.584 [2024-10-30 09:42:46.179310] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.156 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:08.156 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:08:08.156 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:08.156 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.156 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.156 [2024-10-30 09:42:46.597102] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:08.156 [2024-10-30 09:42:46.597152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:08.156 [2024-10-30 09:42:46.597162] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.156 [2024-10-30 09:42:46.597173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.156 [2024-10-30 09:42:46.597180] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:08:08.156 [2024-10-30 09:42:46.597189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:08.156 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.156 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:08.156 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.156 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.156 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.156 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.156 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.156 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.156 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.156 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.156 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.156 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.156 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.156 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.156 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.156 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.156 09:42:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.156 "name": "Existed_Raid", 00:08:08.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.156 "strip_size_kb": 0, 00:08:08.156 "state": "configuring", 00:08:08.156 "raid_level": "raid1", 00:08:08.156 "superblock": false, 00:08:08.156 "num_base_bdevs": 3, 00:08:08.156 "num_base_bdevs_discovered": 0, 00:08:08.157 "num_base_bdevs_operational": 3, 00:08:08.157 "base_bdevs_list": [ 00:08:08.157 { 00:08:08.157 "name": "BaseBdev1", 00:08:08.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.157 "is_configured": false, 00:08:08.157 "data_offset": 0, 00:08:08.157 "data_size": 0 00:08:08.157 }, 00:08:08.157 { 00:08:08.157 "name": "BaseBdev2", 00:08:08.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.157 "is_configured": false, 00:08:08.157 "data_offset": 0, 00:08:08.157 "data_size": 0 00:08:08.157 }, 00:08:08.157 { 00:08:08.157 "name": "BaseBdev3", 00:08:08.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.157 "is_configured": false, 00:08:08.157 "data_offset": 0, 00:08:08.157 "data_size": 0 00:08:08.157 } 00:08:08.157 ] 00:08:08.157 }' 00:08:08.157 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.157 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.418 [2024-10-30 09:42:46.925130] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:08.418 [2024-10-30 09:42:46.925171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.418 [2024-10-30 09:42:46.933124] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:08.418 [2024-10-30 09:42:46.933162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:08.418 [2024-10-30 09:42:46.933170] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.418 [2024-10-30 09:42:46.933179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.418 [2024-10-30 09:42:46.933185] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:08.418 [2024-10-30 09:42:46.933193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.418 [2024-10-30 09:42:46.965532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.418 BaseBdev1 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.418 [ 00:08:08.418 { 00:08:08.418 "name": "BaseBdev1", 00:08:08.418 "aliases": [ 00:08:08.418 "7affa936-25e8-4343-9bbb-0c6b096e9a7a" 00:08:08.418 ], 00:08:08.418 "product_name": "Malloc disk", 00:08:08.418 "block_size": 512, 00:08:08.418 "num_blocks": 65536, 00:08:08.418 "uuid": "7affa936-25e8-4343-9bbb-0c6b096e9a7a", 00:08:08.418 "assigned_rate_limits": { 00:08:08.418 "rw_ios_per_sec": 0, 00:08:08.418 "rw_mbytes_per_sec": 0, 00:08:08.418 "r_mbytes_per_sec": 0, 00:08:08.418 "w_mbytes_per_sec": 0 00:08:08.418 }, 
00:08:08.418 "claimed": true, 00:08:08.418 "claim_type": "exclusive_write", 00:08:08.418 "zoned": false, 00:08:08.418 "supported_io_types": { 00:08:08.418 "read": true, 00:08:08.418 "write": true, 00:08:08.418 "unmap": true, 00:08:08.418 "flush": true, 00:08:08.418 "reset": true, 00:08:08.418 "nvme_admin": false, 00:08:08.418 "nvme_io": false, 00:08:08.418 "nvme_io_md": false, 00:08:08.418 "write_zeroes": true, 00:08:08.418 "zcopy": true, 00:08:08.418 "get_zone_info": false, 00:08:08.418 "zone_management": false, 00:08:08.418 "zone_append": false, 00:08:08.418 "compare": false, 00:08:08.418 "compare_and_write": false, 00:08:08.418 "abort": true, 00:08:08.418 "seek_hole": false, 00:08:08.418 "seek_data": false, 00:08:08.418 "copy": true, 00:08:08.418 "nvme_iov_md": false 00:08:08.418 }, 00:08:08.418 "memory_domains": [ 00:08:08.418 { 00:08:08.418 "dma_device_id": "system", 00:08:08.418 "dma_device_type": 1 00:08:08.418 }, 00:08:08.418 { 00:08:08.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.418 "dma_device_type": 2 00:08:08.418 } 00:08:08.418 ], 00:08:08.418 "driver_specific": {} 00:08:08.418 } 00:08:08.418 ] 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:08.418 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.419 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.419 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.419 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.419 09:42:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.419 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.419 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.419 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.419 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.419 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.419 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.419 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.419 09:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.419 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.419 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.419 "name": "Existed_Raid", 00:08:08.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.419 "strip_size_kb": 0, 00:08:08.419 "state": "configuring", 00:08:08.419 "raid_level": "raid1", 00:08:08.419 "superblock": false, 00:08:08.419 "num_base_bdevs": 3, 00:08:08.419 "num_base_bdevs_discovered": 1, 00:08:08.419 "num_base_bdevs_operational": 3, 00:08:08.419 "base_bdevs_list": [ 00:08:08.419 { 00:08:08.419 "name": "BaseBdev1", 00:08:08.419 "uuid": "7affa936-25e8-4343-9bbb-0c6b096e9a7a", 00:08:08.419 "is_configured": true, 00:08:08.419 "data_offset": 0, 00:08:08.419 "data_size": 65536 00:08:08.419 }, 00:08:08.419 { 00:08:08.419 "name": "BaseBdev2", 00:08:08.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.419 "is_configured": false, 00:08:08.419 
"data_offset": 0, 00:08:08.419 "data_size": 0 00:08:08.419 }, 00:08:08.419 { 00:08:08.419 "name": "BaseBdev3", 00:08:08.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.419 "is_configured": false, 00:08:08.419 "data_offset": 0, 00:08:08.419 "data_size": 0 00:08:08.419 } 00:08:08.419 ] 00:08:08.419 }' 00:08:08.419 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.419 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.008 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:09.008 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.008 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.008 [2024-10-30 09:42:47.313659] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:09.008 [2024-10-30 09:42:47.313816] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:09.008 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.008 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:09.008 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.008 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.008 [2024-10-30 09:42:47.321716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:09.008 [2024-10-30 09:42:47.323587] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.008 [2024-10-30 09:42:47.323627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:08:09.008 [2024-10-30 09:42:47.323637] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:09.008 [2024-10-30 09:42:47.323647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:09.008 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.008 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:09.009 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:09.009 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:09.009 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.009 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.009 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:09.009 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:09.009 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.009 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.009 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.009 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.009 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.009 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.009 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.009 
09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.009 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.009 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.009 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.009 "name": "Existed_Raid", 00:08:09.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.009 "strip_size_kb": 0, 00:08:09.009 "state": "configuring", 00:08:09.009 "raid_level": "raid1", 00:08:09.009 "superblock": false, 00:08:09.009 "num_base_bdevs": 3, 00:08:09.009 "num_base_bdevs_discovered": 1, 00:08:09.009 "num_base_bdevs_operational": 3, 00:08:09.009 "base_bdevs_list": [ 00:08:09.009 { 00:08:09.009 "name": "BaseBdev1", 00:08:09.009 "uuid": "7affa936-25e8-4343-9bbb-0c6b096e9a7a", 00:08:09.009 "is_configured": true, 00:08:09.009 "data_offset": 0, 00:08:09.009 "data_size": 65536 00:08:09.009 }, 00:08:09.009 { 00:08:09.009 "name": "BaseBdev2", 00:08:09.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.009 "is_configured": false, 00:08:09.009 "data_offset": 0, 00:08:09.009 "data_size": 0 00:08:09.009 }, 00:08:09.009 { 00:08:09.009 "name": "BaseBdev3", 00:08:09.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.009 "is_configured": false, 00:08:09.009 "data_offset": 0, 00:08:09.009 "data_size": 0 00:08:09.009 } 00:08:09.009 ] 00:08:09.009 }' 00:08:09.009 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.009 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.281 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:09.281 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.281 09:42:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.281 [2024-10-30 09:42:47.656242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:09.281 BaseBdev2 00:08:09.281 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.281 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:09.281 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:09.281 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:09.281 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:09.281 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:09.281 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:09.281 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:09.281 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.281 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.281 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.281 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:09.281 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.281 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.281 [ 00:08:09.281 { 00:08:09.281 "name": "BaseBdev2", 00:08:09.281 "aliases": [ 00:08:09.281 "9229bad3-792f-44c2-82cc-71d057f34e76" 00:08:09.281 ], 00:08:09.281 "product_name": "Malloc disk", 
00:08:09.281 "block_size": 512, 00:08:09.281 "num_blocks": 65536, 00:08:09.281 "uuid": "9229bad3-792f-44c2-82cc-71d057f34e76", 00:08:09.281 "assigned_rate_limits": { 00:08:09.281 "rw_ios_per_sec": 0, 00:08:09.281 "rw_mbytes_per_sec": 0, 00:08:09.281 "r_mbytes_per_sec": 0, 00:08:09.281 "w_mbytes_per_sec": 0 00:08:09.281 }, 00:08:09.281 "claimed": true, 00:08:09.281 "claim_type": "exclusive_write", 00:08:09.281 "zoned": false, 00:08:09.281 "supported_io_types": { 00:08:09.281 "read": true, 00:08:09.281 "write": true, 00:08:09.281 "unmap": true, 00:08:09.281 "flush": true, 00:08:09.281 "reset": true, 00:08:09.281 "nvme_admin": false, 00:08:09.281 "nvme_io": false, 00:08:09.281 "nvme_io_md": false, 00:08:09.281 "write_zeroes": true, 00:08:09.281 "zcopy": true, 00:08:09.281 "get_zone_info": false, 00:08:09.281 "zone_management": false, 00:08:09.282 "zone_append": false, 00:08:09.282 "compare": false, 00:08:09.282 "compare_and_write": false, 00:08:09.282 "abort": true, 00:08:09.282 "seek_hole": false, 00:08:09.282 "seek_data": false, 00:08:09.282 "copy": true, 00:08:09.282 "nvme_iov_md": false 00:08:09.282 }, 00:08:09.282 "memory_domains": [ 00:08:09.282 { 00:08:09.282 "dma_device_id": "system", 00:08:09.282 "dma_device_type": 1 00:08:09.282 }, 00:08:09.282 { 00:08:09.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.282 "dma_device_type": 2 00:08:09.282 } 00:08:09.282 ], 00:08:09.282 "driver_specific": {} 00:08:09.282 } 00:08:09.282 ] 00:08:09.282 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.282 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:09.282 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:09.282 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:09.282 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:08:09.282 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.282 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.282 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:09.282 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:09.282 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.282 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.282 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.282 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.282 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.282 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.282 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.282 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.282 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.282 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.282 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.282 "name": "Existed_Raid", 00:08:09.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.282 "strip_size_kb": 0, 00:08:09.282 "state": "configuring", 00:08:09.282 "raid_level": "raid1", 00:08:09.282 "superblock": false, 00:08:09.282 "num_base_bdevs": 3, 
00:08:09.282 "num_base_bdevs_discovered": 2, 00:08:09.282 "num_base_bdevs_operational": 3, 00:08:09.282 "base_bdevs_list": [ 00:08:09.282 { 00:08:09.282 "name": "BaseBdev1", 00:08:09.282 "uuid": "7affa936-25e8-4343-9bbb-0c6b096e9a7a", 00:08:09.282 "is_configured": true, 00:08:09.282 "data_offset": 0, 00:08:09.282 "data_size": 65536 00:08:09.282 }, 00:08:09.282 { 00:08:09.282 "name": "BaseBdev2", 00:08:09.282 "uuid": "9229bad3-792f-44c2-82cc-71d057f34e76", 00:08:09.282 "is_configured": true, 00:08:09.282 "data_offset": 0, 00:08:09.282 "data_size": 65536 00:08:09.282 }, 00:08:09.282 { 00:08:09.282 "name": "BaseBdev3", 00:08:09.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.282 "is_configured": false, 00:08:09.282 "data_offset": 0, 00:08:09.282 "data_size": 0 00:08:09.282 } 00:08:09.282 ] 00:08:09.282 }' 00:08:09.282 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.282 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.543 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:09.543 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.543 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.543 [2024-10-30 09:42:48.022465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:09.543 [2024-10-30 09:42:48.022509] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:09.543 [2024-10-30 09:42:48.022523] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:09.543 [2024-10-30 09:42:48.022876] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:09.543 [2024-10-30 09:42:48.023019] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007e80 00:08:09.543 [2024-10-30 09:42:48.023028] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:09.543 [2024-10-30 09:42:48.023298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.543 BaseBdev3 00:08:09.543 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.543 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:09.543 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:09.543 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:09.543 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:09.543 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:09.543 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:09.543 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:09.543 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.543 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.543 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.543 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:09.543 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.543 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.543 [ 00:08:09.543 { 00:08:09.543 "name": "BaseBdev3", 00:08:09.543 "aliases": [ 00:08:09.543 
"f7c3df05-1f5b-47a2-9d17-601d79d831eb" 00:08:09.543 ], 00:08:09.543 "product_name": "Malloc disk", 00:08:09.543 "block_size": 512, 00:08:09.543 "num_blocks": 65536, 00:08:09.543 "uuid": "f7c3df05-1f5b-47a2-9d17-601d79d831eb", 00:08:09.543 "assigned_rate_limits": { 00:08:09.543 "rw_ios_per_sec": 0, 00:08:09.543 "rw_mbytes_per_sec": 0, 00:08:09.543 "r_mbytes_per_sec": 0, 00:08:09.543 "w_mbytes_per_sec": 0 00:08:09.543 }, 00:08:09.543 "claimed": true, 00:08:09.543 "claim_type": "exclusive_write", 00:08:09.543 "zoned": false, 00:08:09.543 "supported_io_types": { 00:08:09.543 "read": true, 00:08:09.543 "write": true, 00:08:09.543 "unmap": true, 00:08:09.543 "flush": true, 00:08:09.543 "reset": true, 00:08:09.543 "nvme_admin": false, 00:08:09.543 "nvme_io": false, 00:08:09.543 "nvme_io_md": false, 00:08:09.543 "write_zeroes": true, 00:08:09.543 "zcopy": true, 00:08:09.543 "get_zone_info": false, 00:08:09.543 "zone_management": false, 00:08:09.543 "zone_append": false, 00:08:09.543 "compare": false, 00:08:09.543 "compare_and_write": false, 00:08:09.543 "abort": true, 00:08:09.543 "seek_hole": false, 00:08:09.543 "seek_data": false, 00:08:09.543 "copy": true, 00:08:09.543 "nvme_iov_md": false 00:08:09.543 }, 00:08:09.543 "memory_domains": [ 00:08:09.543 { 00:08:09.543 "dma_device_id": "system", 00:08:09.543 "dma_device_type": 1 00:08:09.543 }, 00:08:09.543 { 00:08:09.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.543 "dma_device_type": 2 00:08:09.543 } 00:08:09.543 ], 00:08:09.543 "driver_specific": {} 00:08:09.543 } 00:08:09.543 ] 00:08:09.543 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.543 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:09.543 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:09.543 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:09.543 
09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:09.543 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.543 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.543 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:09.543 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:09.543 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.543 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.543 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.544 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.544 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.544 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.544 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.544 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.544 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.544 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.544 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.544 "name": "Existed_Raid", 00:08:09.544 "uuid": "25a10f99-3a17-428c-9326-82509cfc08a3", 00:08:09.544 "strip_size_kb": 0, 00:08:09.544 "state": "online", 00:08:09.544 "raid_level": 
"raid1", 00:08:09.544 "superblock": false, 00:08:09.544 "num_base_bdevs": 3, 00:08:09.544 "num_base_bdevs_discovered": 3, 00:08:09.544 "num_base_bdevs_operational": 3, 00:08:09.544 "base_bdevs_list": [ 00:08:09.544 { 00:08:09.544 "name": "BaseBdev1", 00:08:09.544 "uuid": "7affa936-25e8-4343-9bbb-0c6b096e9a7a", 00:08:09.544 "is_configured": true, 00:08:09.544 "data_offset": 0, 00:08:09.544 "data_size": 65536 00:08:09.544 }, 00:08:09.544 { 00:08:09.544 "name": "BaseBdev2", 00:08:09.544 "uuid": "9229bad3-792f-44c2-82cc-71d057f34e76", 00:08:09.544 "is_configured": true, 00:08:09.544 "data_offset": 0, 00:08:09.544 "data_size": 65536 00:08:09.544 }, 00:08:09.544 { 00:08:09.544 "name": "BaseBdev3", 00:08:09.544 "uuid": "f7c3df05-1f5b-47a2-9d17-601d79d831eb", 00:08:09.544 "is_configured": true, 00:08:09.544 "data_offset": 0, 00:08:09.544 "data_size": 65536 00:08:09.544 } 00:08:09.544 ] 00:08:09.544 }' 00:08:09.544 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.544 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.806 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:09.806 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:09.806 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:09.806 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:09.806 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:09.806 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:09.806 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:09.806 09:42:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.806 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.806 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:09.806 [2024-10-30 09:42:48.366950] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.806 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.806 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:09.806 "name": "Existed_Raid", 00:08:09.806 "aliases": [ 00:08:09.806 "25a10f99-3a17-428c-9326-82509cfc08a3" 00:08:09.806 ], 00:08:09.806 "product_name": "Raid Volume", 00:08:09.806 "block_size": 512, 00:08:09.806 "num_blocks": 65536, 00:08:09.806 "uuid": "25a10f99-3a17-428c-9326-82509cfc08a3", 00:08:09.806 "assigned_rate_limits": { 00:08:09.806 "rw_ios_per_sec": 0, 00:08:09.806 "rw_mbytes_per_sec": 0, 00:08:09.806 "r_mbytes_per_sec": 0, 00:08:09.806 "w_mbytes_per_sec": 0 00:08:09.806 }, 00:08:09.806 "claimed": false, 00:08:09.806 "zoned": false, 00:08:09.806 "supported_io_types": { 00:08:09.806 "read": true, 00:08:09.806 "write": true, 00:08:09.806 "unmap": false, 00:08:09.806 "flush": false, 00:08:09.806 "reset": true, 00:08:09.806 "nvme_admin": false, 00:08:09.806 "nvme_io": false, 00:08:09.806 "nvme_io_md": false, 00:08:09.806 "write_zeroes": true, 00:08:09.806 "zcopy": false, 00:08:09.806 "get_zone_info": false, 00:08:09.806 "zone_management": false, 00:08:09.806 "zone_append": false, 00:08:09.806 "compare": false, 00:08:09.806 "compare_and_write": false, 00:08:09.806 "abort": false, 00:08:09.806 "seek_hole": false, 00:08:09.806 "seek_data": false, 00:08:09.806 "copy": false, 00:08:09.806 "nvme_iov_md": false 00:08:09.806 }, 00:08:09.806 "memory_domains": [ 00:08:09.806 { 00:08:09.806 "dma_device_id": "system", 00:08:09.806 "dma_device_type": 1 00:08:09.806 }, 00:08:09.806 { 
00:08:09.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.806 "dma_device_type": 2 00:08:09.806 }, 00:08:09.806 { 00:08:09.806 "dma_device_id": "system", 00:08:09.806 "dma_device_type": 1 00:08:09.806 }, 00:08:09.806 { 00:08:09.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.806 "dma_device_type": 2 00:08:09.806 }, 00:08:09.806 { 00:08:09.806 "dma_device_id": "system", 00:08:09.806 "dma_device_type": 1 00:08:09.806 }, 00:08:09.806 { 00:08:09.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.806 "dma_device_type": 2 00:08:09.806 } 00:08:09.806 ], 00:08:09.806 "driver_specific": { 00:08:09.806 "raid": { 00:08:09.806 "uuid": "25a10f99-3a17-428c-9326-82509cfc08a3", 00:08:09.806 "strip_size_kb": 0, 00:08:09.806 "state": "online", 00:08:09.806 "raid_level": "raid1", 00:08:09.806 "superblock": false, 00:08:09.806 "num_base_bdevs": 3, 00:08:09.806 "num_base_bdevs_discovered": 3, 00:08:09.806 "num_base_bdevs_operational": 3, 00:08:09.806 "base_bdevs_list": [ 00:08:09.806 { 00:08:09.806 "name": "BaseBdev1", 00:08:09.806 "uuid": "7affa936-25e8-4343-9bbb-0c6b096e9a7a", 00:08:09.806 "is_configured": true, 00:08:09.806 "data_offset": 0, 00:08:09.806 "data_size": 65536 00:08:09.806 }, 00:08:09.806 { 00:08:09.806 "name": "BaseBdev2", 00:08:09.806 "uuid": "9229bad3-792f-44c2-82cc-71d057f34e76", 00:08:09.806 "is_configured": true, 00:08:09.806 "data_offset": 0, 00:08:09.806 "data_size": 65536 00:08:09.806 }, 00:08:09.806 { 00:08:09.806 "name": "BaseBdev3", 00:08:09.806 "uuid": "f7c3df05-1f5b-47a2-9d17-601d79d831eb", 00:08:09.806 "is_configured": true, 00:08:09.807 "data_offset": 0, 00:08:09.807 "data_size": 65536 00:08:09.807 } 00:08:09.807 ] 00:08:09.807 } 00:08:09.807 } 00:08:09.807 }' 00:08:09.807 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='BaseBdev1 00:08:10.069 BaseBdev2 00:08:10.069 BaseBdev3' 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.069 [2024-10-30 09:42:48.566691] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.069 09:42:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.069 "name": "Existed_Raid", 00:08:10.069 "uuid": "25a10f99-3a17-428c-9326-82509cfc08a3", 00:08:10.069 "strip_size_kb": 0, 00:08:10.069 "state": "online", 00:08:10.069 "raid_level": "raid1", 00:08:10.069 "superblock": false, 00:08:10.069 "num_base_bdevs": 3, 00:08:10.069 "num_base_bdevs_discovered": 2, 00:08:10.069 "num_base_bdevs_operational": 2, 00:08:10.069 "base_bdevs_list": [ 00:08:10.069 { 00:08:10.069 "name": null, 00:08:10.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.069 "is_configured": false, 00:08:10.069 "data_offset": 0, 00:08:10.069 "data_size": 65536 00:08:10.069 }, 00:08:10.069 { 00:08:10.069 "name": "BaseBdev2", 00:08:10.069 "uuid": "9229bad3-792f-44c2-82cc-71d057f34e76", 00:08:10.069 "is_configured": true, 00:08:10.069 "data_offset": 0, 00:08:10.069 "data_size": 65536 00:08:10.069 }, 00:08:10.069 { 00:08:10.069 "name": "BaseBdev3", 00:08:10.069 "uuid": "f7c3df05-1f5b-47a2-9d17-601d79d831eb", 00:08:10.069 "is_configured": true, 00:08:10.069 "data_offset": 0, 00:08:10.069 "data_size": 65536 00:08:10.069 } 00:08:10.069 ] 00:08:10.069 }' 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.069 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.642 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:10.642 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:10.642 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.642 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.642 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.642 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 
-- # jq -r '.[0]["name"]' 00:08:10.642 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.642 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:10.642 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:10.642 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:10.642 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.642 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.642 [2024-10-30 09:42:48.998660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # 
rpc_cmd bdev_malloc_delete BaseBdev3 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.642 [2024-10-30 09:42:49.097465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:10.642 [2024-10-30 09:42:49.097658] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:10.642 [2024-10-30 09:42:49.157155] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:10.642 [2024-10-30 09:42:49.157198] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:10.642 [2024-10-30 09:42:49.157210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 
00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.642 BaseBdev2 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:10.642 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:10.643 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:10.643 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:10.643 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:10.643 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:10.643 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:10.643 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.643 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.643 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.643 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:10.643 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.643 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.643 [ 00:08:10.643 { 00:08:10.643 "name": "BaseBdev2", 00:08:10.643 "aliases": [ 00:08:10.643 "d4250a69-d7dc-448a-bdf4-49968e6179d3" 00:08:10.643 ], 00:08:10.643 "product_name": "Malloc disk", 00:08:10.643 "block_size": 512, 00:08:10.643 "num_blocks": 65536, 00:08:10.643 "uuid": "d4250a69-d7dc-448a-bdf4-49968e6179d3", 00:08:10.643 "assigned_rate_limits": { 00:08:10.643 "rw_ios_per_sec": 0, 00:08:10.643 "rw_mbytes_per_sec": 0, 00:08:10.643 "r_mbytes_per_sec": 0, 00:08:10.643 "w_mbytes_per_sec": 0 00:08:10.643 }, 00:08:10.643 "claimed": false, 00:08:10.643 "zoned": false, 00:08:10.643 "supported_io_types": { 00:08:10.643 "read": true, 00:08:10.643 "write": true, 00:08:10.643 "unmap": true, 00:08:10.643 "flush": true, 00:08:10.643 "reset": true, 00:08:10.643 "nvme_admin": false, 00:08:10.643 "nvme_io": false, 00:08:10.643 "nvme_io_md": false, 00:08:10.643 "write_zeroes": true, 00:08:10.643 "zcopy": true, 00:08:10.643 "get_zone_info": false, 00:08:10.643 "zone_management": false, 00:08:10.643 "zone_append": false, 00:08:10.643 "compare": false, 00:08:10.643 "compare_and_write": false, 00:08:10.643 "abort": true, 00:08:10.643 "seek_hole": false, 00:08:10.643 "seek_data": false, 00:08:10.643 "copy": true, 00:08:10.643 "nvme_iov_md": false 00:08:10.643 }, 00:08:10.643 "memory_domains": [ 00:08:10.643 { 00:08:10.643 "dma_device_id": "system", 00:08:10.643 "dma_device_type": 1 00:08:10.643 }, 00:08:10.643 { 00:08:10.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.643 "dma_device_type": 2 00:08:10.643 } 00:08:10.643 ], 00:08:10.643 "driver_specific": {} 00:08:10.643 } 00:08:10.643 ] 00:08:10.643 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.643 
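The `waitforbdev` helper traced above polls `rpc_cmd bdev_get_bdevs -b <name> -t 2000` until the named bdev shows up. The existence check it relies on can be sketched against the JSON the RPC returns (a hypothetical standalone helper, not part of the SPDK test suite; the JSON is trimmed from the BaseBdev2 dump in this trace):

```python
import json

# Trimmed copy of the bdev_get_bdevs output captured in the trace above.
BDEV_JSON = """
[
  {
    "name": "BaseBdev2",
    "aliases": ["d4250a69-d7dc-448a-bdf4-49968e6179d3"],
    "product_name": "Malloc disk",
    "block_size": 512,
    "num_blocks": 65536,
    "claimed": false,
    "supported_io_types": {"read": true, "write": true, "unmap": true}
  }
]
"""

def bdev_exists(rpc_output: str, name: str) -> bool:
    """True if the bdev_get_bdevs JSON array contains a bdev with this name."""
    return any(bdev["name"] == name for bdev in json.loads(rpc_output))
```

In the real helper this check sits inside a retry loop bounded by the `-t 2000` millisecond timeout; here only the JSON predicate is shown.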
09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:10.643 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:10.643 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:10.643 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:10.643 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.643 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.904 BaseBdev3 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.904 [ 00:08:10.904 { 00:08:10.904 "name": "BaseBdev3", 00:08:10.904 "aliases": [ 00:08:10.904 "c126337e-5422-4fc3-8c5c-0e1cf0d090a0" 00:08:10.904 ], 00:08:10.904 "product_name": "Malloc disk", 00:08:10.904 "block_size": 512, 00:08:10.904 "num_blocks": 65536, 00:08:10.904 "uuid": "c126337e-5422-4fc3-8c5c-0e1cf0d090a0", 00:08:10.904 "assigned_rate_limits": { 00:08:10.904 "rw_ios_per_sec": 0, 00:08:10.904 "rw_mbytes_per_sec": 0, 00:08:10.904 "r_mbytes_per_sec": 0, 00:08:10.904 "w_mbytes_per_sec": 0 00:08:10.904 }, 00:08:10.904 "claimed": false, 00:08:10.904 "zoned": false, 00:08:10.904 "supported_io_types": { 00:08:10.904 "read": true, 00:08:10.904 "write": true, 00:08:10.904 "unmap": true, 00:08:10.904 "flush": true, 00:08:10.904 "reset": true, 00:08:10.904 "nvme_admin": false, 00:08:10.904 "nvme_io": false, 00:08:10.904 "nvme_io_md": false, 00:08:10.904 "write_zeroes": true, 00:08:10.904 "zcopy": true, 00:08:10.904 "get_zone_info": false, 00:08:10.904 "zone_management": false, 00:08:10.904 "zone_append": false, 00:08:10.904 "compare": false, 00:08:10.904 "compare_and_write": false, 00:08:10.904 "abort": true, 00:08:10.904 "seek_hole": false, 00:08:10.904 "seek_data": false, 00:08:10.904 "copy": true, 00:08:10.904 "nvme_iov_md": false 00:08:10.904 }, 00:08:10.904 "memory_domains": [ 00:08:10.904 { 00:08:10.904 "dma_device_id": "system", 00:08:10.904 "dma_device_type": 1 00:08:10.904 }, 00:08:10.904 { 00:08:10.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.904 "dma_device_type": 2 00:08:10.904 } 00:08:10.904 ], 00:08:10.904 "driver_specific": {} 00:08:10.904 } 00:08:10.904 ] 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.904 09:42:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.904 [2024-10-30 09:42:49.308504] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:10.904 [2024-10-30 09:42:49.308548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:10.904 [2024-10-30 09:42:49.308565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:10.904 [2024-10-30 09:42:49.310410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.904 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.904 "name": "Existed_Raid", 00:08:10.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.904 "strip_size_kb": 0, 00:08:10.904 "state": "configuring", 00:08:10.904 "raid_level": "raid1", 00:08:10.904 "superblock": false, 00:08:10.904 "num_base_bdevs": 3, 00:08:10.904 "num_base_bdevs_discovered": 2, 00:08:10.904 "num_base_bdevs_operational": 3, 00:08:10.904 "base_bdevs_list": [ 00:08:10.904 { 00:08:10.904 "name": "BaseBdev1", 00:08:10.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.904 "is_configured": false, 00:08:10.904 "data_offset": 0, 00:08:10.904 "data_size": 0 00:08:10.904 }, 00:08:10.904 { 00:08:10.904 "name": "BaseBdev2", 00:08:10.904 "uuid": "d4250a69-d7dc-448a-bdf4-49968e6179d3", 00:08:10.904 "is_configured": true, 00:08:10.904 "data_offset": 0, 00:08:10.904 "data_size": 65536 00:08:10.904 }, 00:08:10.904 { 
00:08:10.904 "name": "BaseBdev3", 00:08:10.904 "uuid": "c126337e-5422-4fc3-8c5c-0e1cf0d090a0", 00:08:10.904 "is_configured": true, 00:08:10.904 "data_offset": 0, 00:08:10.904 "data_size": 65536 00:08:10.904 } 00:08:10.904 ] 00:08:10.905 }' 00:08:10.905 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.905 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.165 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:11.165 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.165 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.165 [2024-10-30 09:42:49.644610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:11.165 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.165 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:11.165 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.165 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.165 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:11.165 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:11.165 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.165 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.165 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.165 09:42:49 bdev_raid.raid_state_function_test 
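The `verify_raid_bdev_state Existed_Raid configuring raid1 0 3` calls traced here select the raid entry with `jq -r '.[] | select(.name == "Existed_Raid")'` and compare its fields against the expected state, level, strip size, and operational count. The same selection and comparison can be sketched in Python (a hypothetical re-expression of the check, assuming the JSON shape shown in the trace):

```python
import json

# Abbreviated raid_bdev_info as dumped by bdev_raid_get_bdevs in the trace.
RAID_JSON = """
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 0,
    "state": "configuring",
    "raid_level": "raid1",
    "superblock": false,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 3
  }
]
"""

def verify_raid_bdev_state(rpc_output, name, state, level, strip_size, operational):
    # jq equivalent: .[] | select(.name == $name)
    info = next(b for b in json.loads(rpc_output) if b["name"] == name)
    return (info["state"] == state
            and info["raid_level"] == level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == operational)
```

The raid stays in `configuring` rather than `online` because BaseBdev1 does not exist yet when `bdev_raid_create` runs, so only 2 of the 3 base bdevs are discovered.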
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.165 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.165 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.165 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.165 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.165 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.165 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.165 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.165 "name": "Existed_Raid", 00:08:11.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.165 "strip_size_kb": 0, 00:08:11.165 "state": "configuring", 00:08:11.165 "raid_level": "raid1", 00:08:11.165 "superblock": false, 00:08:11.165 "num_base_bdevs": 3, 00:08:11.165 "num_base_bdevs_discovered": 1, 00:08:11.165 "num_base_bdevs_operational": 3, 00:08:11.165 "base_bdevs_list": [ 00:08:11.165 { 00:08:11.165 "name": "BaseBdev1", 00:08:11.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.165 "is_configured": false, 00:08:11.165 "data_offset": 0, 00:08:11.165 "data_size": 0 00:08:11.165 }, 00:08:11.165 { 00:08:11.165 "name": null, 00:08:11.165 "uuid": "d4250a69-d7dc-448a-bdf4-49968e6179d3", 00:08:11.165 "is_configured": false, 00:08:11.165 "data_offset": 0, 00:08:11.165 "data_size": 65536 00:08:11.165 }, 00:08:11.165 { 00:08:11.165 "name": "BaseBdev3", 00:08:11.165 "uuid": "c126337e-5422-4fc3-8c5c-0e1cf0d090a0", 00:08:11.165 "is_configured": true, 00:08:11.165 "data_offset": 0, 00:08:11.165 "data_size": 65536 00:08:11.165 } 00:08:11.165 ] 00:08:11.165 }' 00:08:11.165 09:42:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.165 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.426 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.426 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.426 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:11.426 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.426 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.426 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:11.426 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:11.426 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.426 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.426 [2024-10-30 09:42:50.026853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:11.426 BaseBdev1 00:08:11.426 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.426 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:11.426 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:11.426 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:11.426 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:11.426 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:11.426 
09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:11.426 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:11.426 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.426 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.426 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.426 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:11.426 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.426 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.687 [ 00:08:11.687 { 00:08:11.687 "name": "BaseBdev1", 00:08:11.687 "aliases": [ 00:08:11.687 "1e3c3c45-28fe-4427-88c1-3bc77e276790" 00:08:11.687 ], 00:08:11.687 "product_name": "Malloc disk", 00:08:11.687 "block_size": 512, 00:08:11.687 "num_blocks": 65536, 00:08:11.687 "uuid": "1e3c3c45-28fe-4427-88c1-3bc77e276790", 00:08:11.687 "assigned_rate_limits": { 00:08:11.687 "rw_ios_per_sec": 0, 00:08:11.687 "rw_mbytes_per_sec": 0, 00:08:11.687 "r_mbytes_per_sec": 0, 00:08:11.687 "w_mbytes_per_sec": 0 00:08:11.687 }, 00:08:11.687 "claimed": true, 00:08:11.687 "claim_type": "exclusive_write", 00:08:11.687 "zoned": false, 00:08:11.687 "supported_io_types": { 00:08:11.687 "read": true, 00:08:11.687 "write": true, 00:08:11.687 "unmap": true, 00:08:11.687 "flush": true, 00:08:11.687 "reset": true, 00:08:11.687 "nvme_admin": false, 00:08:11.687 "nvme_io": false, 00:08:11.687 "nvme_io_md": false, 00:08:11.687 "write_zeroes": true, 00:08:11.687 "zcopy": true, 00:08:11.687 "get_zone_info": false, 00:08:11.687 "zone_management": false, 00:08:11.687 "zone_append": false, 00:08:11.687 "compare": 
false, 00:08:11.687 "compare_and_write": false, 00:08:11.687 "abort": true, 00:08:11.687 "seek_hole": false, 00:08:11.687 "seek_data": false, 00:08:11.687 "copy": true, 00:08:11.687 "nvme_iov_md": false 00:08:11.687 }, 00:08:11.687 "memory_domains": [ 00:08:11.687 { 00:08:11.687 "dma_device_id": "system", 00:08:11.687 "dma_device_type": 1 00:08:11.687 }, 00:08:11.687 { 00:08:11.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.687 "dma_device_type": 2 00:08:11.687 } 00:08:11.687 ], 00:08:11.687 "driver_specific": {} 00:08:11.687 } 00:08:11.687 ] 00:08:11.687 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.687 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:11.687 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:11.687 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.687 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.687 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:11.687 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:11.687 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.687 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.687 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.687 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.687 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.687 09:42:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.687 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.687 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.687 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.687 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.687 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.687 "name": "Existed_Raid", 00:08:11.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.687 "strip_size_kb": 0, 00:08:11.687 "state": "configuring", 00:08:11.687 "raid_level": "raid1", 00:08:11.687 "superblock": false, 00:08:11.687 "num_base_bdevs": 3, 00:08:11.687 "num_base_bdevs_discovered": 2, 00:08:11.687 "num_base_bdevs_operational": 3, 00:08:11.687 "base_bdevs_list": [ 00:08:11.687 { 00:08:11.687 "name": "BaseBdev1", 00:08:11.687 "uuid": "1e3c3c45-28fe-4427-88c1-3bc77e276790", 00:08:11.687 "is_configured": true, 00:08:11.687 "data_offset": 0, 00:08:11.687 "data_size": 65536 00:08:11.687 }, 00:08:11.687 { 00:08:11.687 "name": null, 00:08:11.687 "uuid": "d4250a69-d7dc-448a-bdf4-49968e6179d3", 00:08:11.687 "is_configured": false, 00:08:11.687 "data_offset": 0, 00:08:11.687 "data_size": 65536 00:08:11.687 }, 00:08:11.687 { 00:08:11.687 "name": "BaseBdev3", 00:08:11.687 "uuid": "c126337e-5422-4fc3-8c5c-0e1cf0d090a0", 00:08:11.687 "is_configured": true, 00:08:11.687 "data_offset": 0, 00:08:11.687 "data_size": 65536 00:08:11.687 } 00:08:11.687 ] 00:08:11.687 }' 00:08:11.687 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.687 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- 
# rpc_cmd bdev_raid_get_bdevs all 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.947 [2024-10-30 09:42:50.422998] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.947 
09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.947 "name": "Existed_Raid", 00:08:11.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.947 "strip_size_kb": 0, 00:08:11.947 "state": "configuring", 00:08:11.947 "raid_level": "raid1", 00:08:11.947 "superblock": false, 00:08:11.947 "num_base_bdevs": 3, 00:08:11.947 "num_base_bdevs_discovered": 1, 00:08:11.947 "num_base_bdevs_operational": 3, 00:08:11.947 "base_bdevs_list": [ 00:08:11.947 { 00:08:11.947 "name": "BaseBdev1", 00:08:11.947 "uuid": "1e3c3c45-28fe-4427-88c1-3bc77e276790", 00:08:11.947 "is_configured": true, 00:08:11.947 "data_offset": 0, 00:08:11.947 "data_size": 65536 00:08:11.947 }, 00:08:11.947 { 00:08:11.947 "name": null, 00:08:11.947 "uuid": "d4250a69-d7dc-448a-bdf4-49968e6179d3", 00:08:11.947 "is_configured": false, 00:08:11.947 "data_offset": 0, 00:08:11.947 "data_size": 65536 00:08:11.947 }, 00:08:11.947 { 00:08:11.947 "name": null, 00:08:11.947 "uuid": "c126337e-5422-4fc3-8c5c-0e1cf0d090a0", 00:08:11.947 "is_configured": false, 00:08:11.947 "data_offset": 0, 
00:08:11.947 "data_size": 65536 00:08:11.947 } 00:08:11.947 ] 00:08:11.947 }' 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.947 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.300 [2024-10-30 09:42:50.819147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.300 "name": "Existed_Raid", 00:08:12.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.300 "strip_size_kb": 0, 00:08:12.300 "state": "configuring", 00:08:12.300 "raid_level": "raid1", 00:08:12.300 "superblock": false, 00:08:12.300 "num_base_bdevs": 3, 00:08:12.300 "num_base_bdevs_discovered": 2, 00:08:12.300 "num_base_bdevs_operational": 3, 00:08:12.300 "base_bdevs_list": [ 00:08:12.300 { 00:08:12.300 "name": "BaseBdev1", 00:08:12.300 "uuid": "1e3c3c45-28fe-4427-88c1-3bc77e276790", 00:08:12.300 "is_configured": true, 00:08:12.300 "data_offset": 0, 00:08:12.300 "data_size": 65536 
00:08:12.300 }, 00:08:12.300 { 00:08:12.300 "name": null, 00:08:12.300 "uuid": "d4250a69-d7dc-448a-bdf4-49968e6179d3", 00:08:12.300 "is_configured": false, 00:08:12.300 "data_offset": 0, 00:08:12.300 "data_size": 65536 00:08:12.300 }, 00:08:12.300 { 00:08:12.300 "name": "BaseBdev3", 00:08:12.300 "uuid": "c126337e-5422-4fc3-8c5c-0e1cf0d090a0", 00:08:12.300 "is_configured": true, 00:08:12.300 "data_offset": 0, 00:08:12.300 "data_size": 65536 00:08:12.300 } 00:08:12.300 ] 00:08:12.300 }' 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.300 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.563 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.563 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:12.563 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.563 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.563 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.563 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:12.563 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:12.563 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.563 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.563 [2024-10-30 09:42:51.163303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:12.824 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.824 09:42:51 bdev_raid.raid_state_function_test -- 
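After each `bdev_raid_remove_base_bdev` / `bdev_raid_add_base_bdev` step, the test reads `.[0].base_bdevs_list[N].is_configured` with jq to confirm the targeted slot flipped while the others stayed put. The flag extraction can be sketched as follows (hypothetical helper mirroring the jq filter; the slot layout matches the trace, where slot 1 holds the removed BaseBdev2):

```python
import json

# Abbreviated base_bdevs_list from the Existed_Raid dump above:
# BaseBdev2 (slot 1) has been removed, so its name is null.
BASE_BDEVS_JSON = """
[
  {
    "name": "Existed_Raid",
    "base_bdevs_list": [
      {"name": "BaseBdev1", "is_configured": true},
      {"name": null, "is_configured": false},
      {"name": "BaseBdev3", "is_configured": true}
    ]
  }
]
"""

def slot_is_configured(rpc_output: str, index: int) -> bool:
    # jq equivalent: .[0].base_bdevs_list[<index>].is_configured
    return json.loads(rpc_output)[0]["base_bdevs_list"][index]["is_configured"]
```

Note that a removed slot keeps its position in `base_bdevs_list` with `"name": null`, which is why re-adding the bdev with `bdev_raid_add_base_bdev` restores the same slot.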
bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:12.824 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.824 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.824 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:12.824 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:12.824 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.824 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.824 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.824 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.824 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.824 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.824 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.824 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.824 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.824 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.824 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.824 "name": "Existed_Raid", 00:08:12.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.824 "strip_size_kb": 0, 00:08:12.824 "state": "configuring", 00:08:12.824 "raid_level": "raid1", 00:08:12.824 
"superblock": false, 00:08:12.824 "num_base_bdevs": 3, 00:08:12.824 "num_base_bdevs_discovered": 1, 00:08:12.824 "num_base_bdevs_operational": 3, 00:08:12.824 "base_bdevs_list": [ 00:08:12.824 { 00:08:12.824 "name": null, 00:08:12.824 "uuid": "1e3c3c45-28fe-4427-88c1-3bc77e276790", 00:08:12.824 "is_configured": false, 00:08:12.824 "data_offset": 0, 00:08:12.824 "data_size": 65536 00:08:12.824 }, 00:08:12.824 { 00:08:12.824 "name": null, 00:08:12.824 "uuid": "d4250a69-d7dc-448a-bdf4-49968e6179d3", 00:08:12.824 "is_configured": false, 00:08:12.824 "data_offset": 0, 00:08:12.824 "data_size": 65536 00:08:12.824 }, 00:08:12.824 { 00:08:12.824 "name": "BaseBdev3", 00:08:12.824 "uuid": "c126337e-5422-4fc3-8c5c-0e1cf0d090a0", 00:08:12.824 "is_configured": true, 00:08:12.824 "data_offset": 0, 00:08:12.824 "data_size": 65536 00:08:12.824 } 00:08:12.824 ] 00:08:12.824 }' 00:08:12.824 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.824 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.087 [2024-10-30 09:42:51.587894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.087 09:42:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.087 "name": "Existed_Raid", 00:08:13.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.087 "strip_size_kb": 0, 00:08:13.087 "state": "configuring", 00:08:13.087 "raid_level": "raid1", 00:08:13.087 "superblock": false, 00:08:13.087 "num_base_bdevs": 3, 00:08:13.087 "num_base_bdevs_discovered": 2, 00:08:13.087 "num_base_bdevs_operational": 3, 00:08:13.087 "base_bdevs_list": [ 00:08:13.087 { 00:08:13.087 "name": null, 00:08:13.087 "uuid": "1e3c3c45-28fe-4427-88c1-3bc77e276790", 00:08:13.087 "is_configured": false, 00:08:13.087 "data_offset": 0, 00:08:13.087 "data_size": 65536 00:08:13.087 }, 00:08:13.087 { 00:08:13.087 "name": "BaseBdev2", 00:08:13.087 "uuid": "d4250a69-d7dc-448a-bdf4-49968e6179d3", 00:08:13.087 "is_configured": true, 00:08:13.087 "data_offset": 0, 00:08:13.087 "data_size": 65536 00:08:13.087 }, 00:08:13.087 { 00:08:13.087 "name": "BaseBdev3", 00:08:13.087 "uuid": "c126337e-5422-4fc3-8c5c-0e1cf0d090a0", 00:08:13.087 "is_configured": true, 00:08:13.087 "data_offset": 0, 00:08:13.087 "data_size": 65536 00:08:13.087 } 00:08:13.087 ] 00:08:13.087 }' 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.087 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.349 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.349 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:13.349 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.349 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.349 09:42:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.349 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:13.349 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.349 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.349 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.349 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:13.349 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.611 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1e3c3c45-28fe-4427-88c1-3bc77e276790 00:08:13.611 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.611 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.611 [2024-10-30 09:42:52.010475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:13.611 [2024-10-30 09:42:52.010518] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:13.611 [2024-10-30 09:42:52.010526] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:13.611 [2024-10-30 09:42:52.010773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:13.611 [2024-10-30 09:42:52.010908] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:13.611 [2024-10-30 09:42:52.010919] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:13.611 [2024-10-30 09:42:52.011157] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.611 NewBaseBdev 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.611 [ 00:08:13.611 { 00:08:13.611 "name": "NewBaseBdev", 00:08:13.611 "aliases": [ 00:08:13.611 "1e3c3c45-28fe-4427-88c1-3bc77e276790" 00:08:13.611 ], 00:08:13.611 "product_name": "Malloc disk", 00:08:13.611 "block_size": 512, 00:08:13.611 "num_blocks": 65536, 00:08:13.611 "uuid": "1e3c3c45-28fe-4427-88c1-3bc77e276790", 
00:08:13.611 "assigned_rate_limits": { 00:08:13.611 "rw_ios_per_sec": 0, 00:08:13.611 "rw_mbytes_per_sec": 0, 00:08:13.611 "r_mbytes_per_sec": 0, 00:08:13.611 "w_mbytes_per_sec": 0 00:08:13.611 }, 00:08:13.611 "claimed": true, 00:08:13.611 "claim_type": "exclusive_write", 00:08:13.611 "zoned": false, 00:08:13.611 "supported_io_types": { 00:08:13.611 "read": true, 00:08:13.611 "write": true, 00:08:13.611 "unmap": true, 00:08:13.611 "flush": true, 00:08:13.611 "reset": true, 00:08:13.611 "nvme_admin": false, 00:08:13.611 "nvme_io": false, 00:08:13.611 "nvme_io_md": false, 00:08:13.611 "write_zeroes": true, 00:08:13.611 "zcopy": true, 00:08:13.611 "get_zone_info": false, 00:08:13.611 "zone_management": false, 00:08:13.611 "zone_append": false, 00:08:13.611 "compare": false, 00:08:13.611 "compare_and_write": false, 00:08:13.611 "abort": true, 00:08:13.611 "seek_hole": false, 00:08:13.611 "seek_data": false, 00:08:13.611 "copy": true, 00:08:13.611 "nvme_iov_md": false 00:08:13.611 }, 00:08:13.611 "memory_domains": [ 00:08:13.611 { 00:08:13.611 "dma_device_id": "system", 00:08:13.611 "dma_device_type": 1 00:08:13.611 }, 00:08:13.611 { 00:08:13.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.611 "dma_device_type": 2 00:08:13.611 } 00:08:13.611 ], 00:08:13.611 "driver_specific": {} 00:08:13.611 } 00:08:13.611 ] 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.611 "name": "Existed_Raid", 00:08:13.611 "uuid": "bb8bf102-5b3d-4454-8723-387802e1906b", 00:08:13.611 "strip_size_kb": 0, 00:08:13.611 "state": "online", 00:08:13.611 "raid_level": "raid1", 00:08:13.611 "superblock": false, 00:08:13.611 "num_base_bdevs": 3, 00:08:13.611 "num_base_bdevs_discovered": 3, 00:08:13.611 "num_base_bdevs_operational": 3, 00:08:13.611 "base_bdevs_list": [ 00:08:13.611 { 00:08:13.611 "name": "NewBaseBdev", 00:08:13.611 "uuid": "1e3c3c45-28fe-4427-88c1-3bc77e276790", 00:08:13.611 "is_configured": true, 00:08:13.611 "data_offset": 0, 00:08:13.611 "data_size": 65536 
00:08:13.611 }, 00:08:13.611 { 00:08:13.611 "name": "BaseBdev2", 00:08:13.611 "uuid": "d4250a69-d7dc-448a-bdf4-49968e6179d3", 00:08:13.611 "is_configured": true, 00:08:13.611 "data_offset": 0, 00:08:13.611 "data_size": 65536 00:08:13.611 }, 00:08:13.611 { 00:08:13.611 "name": "BaseBdev3", 00:08:13.611 "uuid": "c126337e-5422-4fc3-8c5c-0e1cf0d090a0", 00:08:13.611 "is_configured": true, 00:08:13.611 "data_offset": 0, 00:08:13.611 "data_size": 65536 00:08:13.611 } 00:08:13.611 ] 00:08:13.611 }' 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.611 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.873 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:13.873 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:13.873 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:13.873 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:13.873 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:13.873 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:13.873 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:13.873 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:13.873 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.873 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.873 [2024-10-30 09:42:52.358937] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.873 09:42:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.873 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:13.873 "name": "Existed_Raid", 00:08:13.873 "aliases": [ 00:08:13.873 "bb8bf102-5b3d-4454-8723-387802e1906b" 00:08:13.873 ], 00:08:13.873 "product_name": "Raid Volume", 00:08:13.873 "block_size": 512, 00:08:13.873 "num_blocks": 65536, 00:08:13.873 "uuid": "bb8bf102-5b3d-4454-8723-387802e1906b", 00:08:13.873 "assigned_rate_limits": { 00:08:13.873 "rw_ios_per_sec": 0, 00:08:13.873 "rw_mbytes_per_sec": 0, 00:08:13.873 "r_mbytes_per_sec": 0, 00:08:13.873 "w_mbytes_per_sec": 0 00:08:13.873 }, 00:08:13.873 "claimed": false, 00:08:13.873 "zoned": false, 00:08:13.873 "supported_io_types": { 00:08:13.873 "read": true, 00:08:13.873 "write": true, 00:08:13.873 "unmap": false, 00:08:13.873 "flush": false, 00:08:13.873 "reset": true, 00:08:13.873 "nvme_admin": false, 00:08:13.873 "nvme_io": false, 00:08:13.873 "nvme_io_md": false, 00:08:13.873 "write_zeroes": true, 00:08:13.873 "zcopy": false, 00:08:13.873 "get_zone_info": false, 00:08:13.873 "zone_management": false, 00:08:13.873 "zone_append": false, 00:08:13.873 "compare": false, 00:08:13.873 "compare_and_write": false, 00:08:13.873 "abort": false, 00:08:13.873 "seek_hole": false, 00:08:13.873 "seek_data": false, 00:08:13.873 "copy": false, 00:08:13.873 "nvme_iov_md": false 00:08:13.873 }, 00:08:13.873 "memory_domains": [ 00:08:13.873 { 00:08:13.873 "dma_device_id": "system", 00:08:13.873 "dma_device_type": 1 00:08:13.873 }, 00:08:13.873 { 00:08:13.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.873 "dma_device_type": 2 00:08:13.873 }, 00:08:13.873 { 00:08:13.873 "dma_device_id": "system", 00:08:13.873 "dma_device_type": 1 00:08:13.873 }, 00:08:13.873 { 00:08:13.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.873 "dma_device_type": 2 00:08:13.873 }, 00:08:13.873 { 00:08:13.873 "dma_device_id": "system", 00:08:13.873 "dma_device_type": 1 00:08:13.873 }, 
00:08:13.873 { 00:08:13.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.873 "dma_device_type": 2 00:08:13.873 } 00:08:13.873 ], 00:08:13.873 "driver_specific": { 00:08:13.873 "raid": { 00:08:13.873 "uuid": "bb8bf102-5b3d-4454-8723-387802e1906b", 00:08:13.873 "strip_size_kb": 0, 00:08:13.873 "state": "online", 00:08:13.873 "raid_level": "raid1", 00:08:13.873 "superblock": false, 00:08:13.873 "num_base_bdevs": 3, 00:08:13.873 "num_base_bdevs_discovered": 3, 00:08:13.873 "num_base_bdevs_operational": 3, 00:08:13.873 "base_bdevs_list": [ 00:08:13.873 { 00:08:13.873 "name": "NewBaseBdev", 00:08:13.873 "uuid": "1e3c3c45-28fe-4427-88c1-3bc77e276790", 00:08:13.873 "is_configured": true, 00:08:13.873 "data_offset": 0, 00:08:13.873 "data_size": 65536 00:08:13.873 }, 00:08:13.874 { 00:08:13.874 "name": "BaseBdev2", 00:08:13.874 "uuid": "d4250a69-d7dc-448a-bdf4-49968e6179d3", 00:08:13.874 "is_configured": true, 00:08:13.874 "data_offset": 0, 00:08:13.874 "data_size": 65536 00:08:13.874 }, 00:08:13.874 { 00:08:13.874 "name": "BaseBdev3", 00:08:13.874 "uuid": "c126337e-5422-4fc3-8c5c-0e1cf0d090a0", 00:08:13.874 "is_configured": true, 00:08:13.874 "data_offset": 0, 00:08:13.874 "data_size": 65536 00:08:13.874 } 00:08:13.874 ] 00:08:13.874 } 00:08:13.874 } 00:08:13.874 }' 00:08:13.874 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:13.874 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:13.874 BaseBdev2 00:08:13.874 BaseBdev3' 00:08:13.874 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.874 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:13.874 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:08:13.874 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:13.874 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.874 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.874 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.874 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.874 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.874 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.874 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.874 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:13.874 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.874 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.874 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.135 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.135 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:14.135 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:14.135 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:14.135 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.135 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:14.135 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.135 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.135 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.135 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:14.135 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:14.135 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:14.135 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.135 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.135 [2024-10-30 09:42:52.562652] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:14.135 [2024-10-30 09:42:52.562680] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:14.135 [2024-10-30 09:42:52.562750] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:14.135 [2024-10-30 09:42:52.563033] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:14.135 [2024-10-30 09:42:52.563043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:14.135 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.135 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65887 00:08:14.135 09:42:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 65887 ']' 00:08:14.135 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 65887 00:08:14.135 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:08:14.135 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:14.135 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65887 00:08:14.135 killing process with pid 65887 00:08:14.135 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:14.135 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:14.135 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65887' 00:08:14.135 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 65887 00:08:14.135 [2024-10-30 09:42:52.591526] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:14.135 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 65887 00:08:14.396 [2024-10-30 09:42:52.779603] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:15.005 00:08:15.005 real 0m7.812s 00:08:15.005 user 0m12.453s 00:08:15.005 sys 0m1.238s 00:08:15.005 ************************************ 00:08:15.005 END TEST raid_state_function_test 00:08:15.005 ************************************ 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.005 09:42:53 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 3 true 00:08:15.005 09:42:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:15.005 09:42:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:15.005 09:42:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:15.005 ************************************ 00:08:15.005 START TEST raid_state_function_test_sb 00:08:15.005 ************************************ 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 true 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev3 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:15.005 Process raid pid: 66479 00:08:15.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66479 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66479' 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66479 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 66479 ']' 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:15.005 09:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.005 [2024-10-30 09:42:53.610045] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:08:15.005 [2024-10-30 09:42:53.610172] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.268 [2024-10-30 09:42:53.771724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.268 [2024-10-30 09:42:53.871604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.529 [2024-10-30 09:42:54.009092] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.530 [2024-10-30 09:42:54.009248] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.101 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:16.101 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:08:16.101 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:16.101 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.101 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.101 [2024-10-30 09:42:54.466819] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:16.101 [2024-10-30 09:42:54.466875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:16.101 [2024-10-30 09:42:54.466885] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:16.101 [2024-10-30 09:42:54.466895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:16.101 [2024-10-30 09:42:54.466903] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:08:16.101 [2024-10-30 09:42:54.466913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:16.101 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.101 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:16.101 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.101 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.101 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:16.101 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:16.101 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.101 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.101 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.101 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.101 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.101 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.101 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.101 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.101 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.101 09:42:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.101 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.101 "name": "Existed_Raid", 00:08:16.101 "uuid": "71230e1e-f6a1-4f32-acc7-80ccc930aed0", 00:08:16.101 "strip_size_kb": 0, 00:08:16.101 "state": "configuring", 00:08:16.101 "raid_level": "raid1", 00:08:16.101 "superblock": true, 00:08:16.101 "num_base_bdevs": 3, 00:08:16.101 "num_base_bdevs_discovered": 0, 00:08:16.101 "num_base_bdevs_operational": 3, 00:08:16.101 "base_bdevs_list": [ 00:08:16.101 { 00:08:16.101 "name": "BaseBdev1", 00:08:16.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.101 "is_configured": false, 00:08:16.101 "data_offset": 0, 00:08:16.101 "data_size": 0 00:08:16.101 }, 00:08:16.101 { 00:08:16.101 "name": "BaseBdev2", 00:08:16.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.101 "is_configured": false, 00:08:16.101 "data_offset": 0, 00:08:16.101 "data_size": 0 00:08:16.101 }, 00:08:16.101 { 00:08:16.101 "name": "BaseBdev3", 00:08:16.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.101 "is_configured": false, 00:08:16.101 "data_offset": 0, 00:08:16.101 "data_size": 0 00:08:16.101 } 00:08:16.101 ] 00:08:16.101 }' 00:08:16.101 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.101 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.362 [2024-10-30 09:42:54.778816] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:16.362 [2024-10-30 09:42:54.778846] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.362 [2024-10-30 09:42:54.786825] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:16.362 [2024-10-30 09:42:54.786953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:16.362 [2024-10-30 09:42:54.787010] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:16.362 [2024-10-30 09:42:54.787037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:16.362 [2024-10-30 09:42:54.787055] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:16.362 [2024-10-30 09:42:54.787085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.362 [2024-10-30 09:42:54.819235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:16.362 BaseBdev1 
00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.362 [ 00:08:16.362 { 00:08:16.362 "name": "BaseBdev1", 00:08:16.362 "aliases": [ 00:08:16.362 "8a197da4-9e0c-4016-9b40-486e51a05d66" 00:08:16.362 ], 00:08:16.362 "product_name": "Malloc disk", 00:08:16.362 "block_size": 512, 00:08:16.362 "num_blocks": 65536, 00:08:16.362 "uuid": "8a197da4-9e0c-4016-9b40-486e51a05d66", 00:08:16.362 "assigned_rate_limits": { 00:08:16.362 
"rw_ios_per_sec": 0, 00:08:16.362 "rw_mbytes_per_sec": 0, 00:08:16.362 "r_mbytes_per_sec": 0, 00:08:16.362 "w_mbytes_per_sec": 0 00:08:16.362 }, 00:08:16.362 "claimed": true, 00:08:16.362 "claim_type": "exclusive_write", 00:08:16.362 "zoned": false, 00:08:16.362 "supported_io_types": { 00:08:16.362 "read": true, 00:08:16.362 "write": true, 00:08:16.362 "unmap": true, 00:08:16.362 "flush": true, 00:08:16.362 "reset": true, 00:08:16.362 "nvme_admin": false, 00:08:16.362 "nvme_io": false, 00:08:16.362 "nvme_io_md": false, 00:08:16.362 "write_zeroes": true, 00:08:16.362 "zcopy": true, 00:08:16.362 "get_zone_info": false, 00:08:16.362 "zone_management": false, 00:08:16.362 "zone_append": false, 00:08:16.362 "compare": false, 00:08:16.362 "compare_and_write": false, 00:08:16.362 "abort": true, 00:08:16.362 "seek_hole": false, 00:08:16.362 "seek_data": false, 00:08:16.362 "copy": true, 00:08:16.362 "nvme_iov_md": false 00:08:16.362 }, 00:08:16.362 "memory_domains": [ 00:08:16.362 { 00:08:16.362 "dma_device_id": "system", 00:08:16.362 "dma_device_type": 1 00:08:16.362 }, 00:08:16.362 { 00:08:16.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.362 "dma_device_type": 2 00:08:16.362 } 00:08:16.362 ], 00:08:16.362 "driver_specific": {} 00:08:16.362 } 00:08:16.362 ] 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.362 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:08:16.363 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:16.363 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.363 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.363 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.363 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.363 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.363 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.363 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.363 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.363 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.363 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.363 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.363 "name": "Existed_Raid", 00:08:16.363 "uuid": "0cd8e616-6d27-4cd3-8ae7-19e1e2e860f7", 00:08:16.363 "strip_size_kb": 0, 00:08:16.363 "state": "configuring", 00:08:16.363 "raid_level": "raid1", 00:08:16.363 "superblock": true, 00:08:16.363 "num_base_bdevs": 3, 00:08:16.363 "num_base_bdevs_discovered": 1, 00:08:16.363 "num_base_bdevs_operational": 3, 00:08:16.363 "base_bdevs_list": [ 00:08:16.363 { 00:08:16.363 "name": "BaseBdev1", 00:08:16.363 "uuid": "8a197da4-9e0c-4016-9b40-486e51a05d66", 00:08:16.363 "is_configured": true, 00:08:16.363 "data_offset": 2048, 00:08:16.363 "data_size": 63488 
00:08:16.363 }, 00:08:16.363 { 00:08:16.363 "name": "BaseBdev2", 00:08:16.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.363 "is_configured": false, 00:08:16.363 "data_offset": 0, 00:08:16.363 "data_size": 0 00:08:16.363 }, 00:08:16.363 { 00:08:16.363 "name": "BaseBdev3", 00:08:16.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.363 "is_configured": false, 00:08:16.363 "data_offset": 0, 00:08:16.363 "data_size": 0 00:08:16.363 } 00:08:16.363 ] 00:08:16.363 }' 00:08:16.363 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.363 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.623 [2024-10-30 09:42:55.175370] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:16.623 [2024-10-30 09:42:55.175531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.623 [2024-10-30 09:42:55.183449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:16.623 [2024-10-30 09:42:55.185315] 
bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:16.623 [2024-10-30 09:42:55.185355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:16.623 [2024-10-30 09:42:55.185364] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:16.623 [2024-10-30 09:42:55.185373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.623 "name": "Existed_Raid", 00:08:16.623 "uuid": "f8e4dcb7-8b31-42c3-912f-10d2bbdd2ead", 00:08:16.623 "strip_size_kb": 0, 00:08:16.623 "state": "configuring", 00:08:16.623 "raid_level": "raid1", 00:08:16.623 "superblock": true, 00:08:16.623 "num_base_bdevs": 3, 00:08:16.623 "num_base_bdevs_discovered": 1, 00:08:16.623 "num_base_bdevs_operational": 3, 00:08:16.623 "base_bdevs_list": [ 00:08:16.623 { 00:08:16.623 "name": "BaseBdev1", 00:08:16.623 "uuid": "8a197da4-9e0c-4016-9b40-486e51a05d66", 00:08:16.623 "is_configured": true, 00:08:16.623 "data_offset": 2048, 00:08:16.623 "data_size": 63488 00:08:16.623 }, 00:08:16.623 { 00:08:16.623 "name": "BaseBdev2", 00:08:16.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.623 "is_configured": false, 00:08:16.623 "data_offset": 0, 00:08:16.623 "data_size": 0 00:08:16.623 }, 00:08:16.623 { 00:08:16.623 "name": "BaseBdev3", 00:08:16.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.623 "is_configured": false, 00:08:16.623 "data_offset": 0, 00:08:16.623 "data_size": 0 00:08:16.623 } 00:08:16.623 ] 00:08:16.623 }' 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.623 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.195 [2024-10-30 09:42:55.538309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.195 BaseBdev2 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.195 [ 00:08:17.195 { 00:08:17.195 "name": "BaseBdev2", 00:08:17.195 "aliases": [ 00:08:17.195 "16403125-38ee-4312-97b5-b7d9063803d1" 00:08:17.195 ], 00:08:17.195 "product_name": "Malloc disk", 00:08:17.195 "block_size": 512, 00:08:17.195 "num_blocks": 65536, 00:08:17.195 "uuid": "16403125-38ee-4312-97b5-b7d9063803d1", 00:08:17.195 "assigned_rate_limits": { 00:08:17.195 "rw_ios_per_sec": 0, 00:08:17.195 "rw_mbytes_per_sec": 0, 00:08:17.195 "r_mbytes_per_sec": 0, 00:08:17.195 "w_mbytes_per_sec": 0 00:08:17.195 }, 00:08:17.195 "claimed": true, 00:08:17.195 "claim_type": "exclusive_write", 00:08:17.195 "zoned": false, 00:08:17.195 "supported_io_types": { 00:08:17.195 "read": true, 00:08:17.195 "write": true, 00:08:17.195 "unmap": true, 00:08:17.195 "flush": true, 00:08:17.195 "reset": true, 00:08:17.195 "nvme_admin": false, 00:08:17.195 "nvme_io": false, 00:08:17.195 "nvme_io_md": false, 00:08:17.195 "write_zeroes": true, 00:08:17.195 "zcopy": true, 00:08:17.195 "get_zone_info": false, 00:08:17.195 "zone_management": false, 00:08:17.195 "zone_append": false, 00:08:17.195 "compare": false, 00:08:17.195 "compare_and_write": false, 00:08:17.195 "abort": true, 00:08:17.195 "seek_hole": false, 00:08:17.195 "seek_data": false, 00:08:17.195 "copy": true, 00:08:17.195 "nvme_iov_md": false 00:08:17.195 }, 00:08:17.195 "memory_domains": [ 00:08:17.195 { 00:08:17.195 "dma_device_id": "system", 00:08:17.195 "dma_device_type": 1 00:08:17.195 }, 00:08:17.195 { 00:08:17.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.195 "dma_device_type": 2 00:08:17.195 } 00:08:17.195 ], 00:08:17.195 "driver_specific": {} 00:08:17.195 } 00:08:17.195 ] 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.195 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.195 
09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.195 "name": "Existed_Raid", 00:08:17.195 "uuid": "f8e4dcb7-8b31-42c3-912f-10d2bbdd2ead", 00:08:17.195 "strip_size_kb": 0, 00:08:17.195 "state": "configuring", 00:08:17.195 "raid_level": "raid1", 00:08:17.195 "superblock": true, 00:08:17.195 "num_base_bdevs": 3, 00:08:17.195 "num_base_bdevs_discovered": 2, 00:08:17.195 "num_base_bdevs_operational": 3, 00:08:17.195 "base_bdevs_list": [ 00:08:17.195 { 00:08:17.195 "name": "BaseBdev1", 00:08:17.195 "uuid": "8a197da4-9e0c-4016-9b40-486e51a05d66", 00:08:17.195 "is_configured": true, 00:08:17.195 "data_offset": 2048, 00:08:17.195 "data_size": 63488 00:08:17.195 }, 00:08:17.195 { 00:08:17.195 "name": "BaseBdev2", 00:08:17.195 "uuid": "16403125-38ee-4312-97b5-b7d9063803d1", 00:08:17.196 "is_configured": true, 00:08:17.196 "data_offset": 2048, 00:08:17.196 "data_size": 63488 00:08:17.196 }, 00:08:17.196 { 00:08:17.196 "name": "BaseBdev3", 00:08:17.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.196 "is_configured": false, 00:08:17.196 "data_offset": 0, 00:08:17.196 "data_size": 0 00:08:17.196 } 00:08:17.196 ] 00:08:17.196 }' 00:08:17.196 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.196 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.455 [2024-10-30 09:42:55.934558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:17.455 [2024-10-30 09:42:55.934802] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:08:17.455 [2024-10-30 09:42:55.934821] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:17.455 [2024-10-30 09:42:55.935098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:17.455 [2024-10-30 09:42:55.935233] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:17.455 [2024-10-30 09:42:55.935248] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:17.455 BaseBdev3 00:08:17.455 [2024-10-30 09:42:55.935385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.455 09:42:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.455 [ 00:08:17.455 { 00:08:17.455 "name": "BaseBdev3", 00:08:17.455 "aliases": [ 00:08:17.455 "143d4c53-1613-4da7-a952-ca60508aed0b" 00:08:17.455 ], 00:08:17.455 "product_name": "Malloc disk", 00:08:17.455 "block_size": 512, 00:08:17.455 "num_blocks": 65536, 00:08:17.455 "uuid": "143d4c53-1613-4da7-a952-ca60508aed0b", 00:08:17.455 "assigned_rate_limits": { 00:08:17.455 "rw_ios_per_sec": 0, 00:08:17.455 "rw_mbytes_per_sec": 0, 00:08:17.455 "r_mbytes_per_sec": 0, 00:08:17.455 "w_mbytes_per_sec": 0 00:08:17.455 }, 00:08:17.455 "claimed": true, 00:08:17.455 "claim_type": "exclusive_write", 00:08:17.455 "zoned": false, 00:08:17.455 "supported_io_types": { 00:08:17.455 "read": true, 00:08:17.455 "write": true, 00:08:17.455 "unmap": true, 00:08:17.455 "flush": true, 00:08:17.455 "reset": true, 00:08:17.455 "nvme_admin": false, 00:08:17.455 "nvme_io": false, 00:08:17.455 "nvme_io_md": false, 00:08:17.455 "write_zeroes": true, 00:08:17.455 "zcopy": true, 00:08:17.455 "get_zone_info": false, 00:08:17.455 "zone_management": false, 00:08:17.455 "zone_append": false, 00:08:17.455 "compare": false, 00:08:17.455 "compare_and_write": false, 00:08:17.455 "abort": true, 00:08:17.455 "seek_hole": false, 00:08:17.455 "seek_data": false, 00:08:17.455 "copy": true, 00:08:17.455 "nvme_iov_md": false 00:08:17.455 }, 00:08:17.455 "memory_domains": [ 00:08:17.455 { 00:08:17.455 "dma_device_id": "system", 00:08:17.455 "dma_device_type": 1 00:08:17.455 }, 00:08:17.455 { 00:08:17.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.455 "dma_device_type": 2 00:08:17.455 } 00:08:17.455 ], 00:08:17.455 "driver_specific": {} 00:08:17.455 } 00:08:17.455 ] 
00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.455 
09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.455 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.456 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.456 "name": "Existed_Raid", 00:08:17.456 "uuid": "f8e4dcb7-8b31-42c3-912f-10d2bbdd2ead", 00:08:17.456 "strip_size_kb": 0, 00:08:17.456 "state": "online", 00:08:17.456 "raid_level": "raid1", 00:08:17.456 "superblock": true, 00:08:17.456 "num_base_bdevs": 3, 00:08:17.456 "num_base_bdevs_discovered": 3, 00:08:17.456 "num_base_bdevs_operational": 3, 00:08:17.456 "base_bdevs_list": [ 00:08:17.456 { 00:08:17.456 "name": "BaseBdev1", 00:08:17.456 "uuid": "8a197da4-9e0c-4016-9b40-486e51a05d66", 00:08:17.456 "is_configured": true, 00:08:17.456 "data_offset": 2048, 00:08:17.456 "data_size": 63488 00:08:17.456 }, 00:08:17.456 { 00:08:17.456 "name": "BaseBdev2", 00:08:17.456 "uuid": "16403125-38ee-4312-97b5-b7d9063803d1", 00:08:17.456 "is_configured": true, 00:08:17.456 "data_offset": 2048, 00:08:17.456 "data_size": 63488 00:08:17.456 }, 00:08:17.456 { 00:08:17.456 "name": "BaseBdev3", 00:08:17.456 "uuid": "143d4c53-1613-4da7-a952-ca60508aed0b", 00:08:17.456 "is_configured": true, 00:08:17.456 "data_offset": 2048, 00:08:17.456 "data_size": 63488 00:08:17.456 } 00:08:17.456 ] 00:08:17.456 }' 00:08:17.456 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.456 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.716 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:17.716 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:17.716 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:08:17.716 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:17.716 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:17.716 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:17.716 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:17.716 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:17.716 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.716 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.716 [2024-10-30 09:42:56.287014] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.716 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.716 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:17.716 "name": "Existed_Raid", 00:08:17.716 "aliases": [ 00:08:17.716 "f8e4dcb7-8b31-42c3-912f-10d2bbdd2ead" 00:08:17.716 ], 00:08:17.716 "product_name": "Raid Volume", 00:08:17.716 "block_size": 512, 00:08:17.716 "num_blocks": 63488, 00:08:17.716 "uuid": "f8e4dcb7-8b31-42c3-912f-10d2bbdd2ead", 00:08:17.716 "assigned_rate_limits": { 00:08:17.716 "rw_ios_per_sec": 0, 00:08:17.716 "rw_mbytes_per_sec": 0, 00:08:17.717 "r_mbytes_per_sec": 0, 00:08:17.717 "w_mbytes_per_sec": 0 00:08:17.717 }, 00:08:17.717 "claimed": false, 00:08:17.717 "zoned": false, 00:08:17.717 "supported_io_types": { 00:08:17.717 "read": true, 00:08:17.717 "write": true, 00:08:17.717 "unmap": false, 00:08:17.717 "flush": false, 00:08:17.717 "reset": true, 00:08:17.717 "nvme_admin": false, 00:08:17.717 "nvme_io": false, 00:08:17.717 "nvme_io_md": false, 00:08:17.717 "write_zeroes": true, 
00:08:17.717 "zcopy": false, 00:08:17.717 "get_zone_info": false, 00:08:17.717 "zone_management": false, 00:08:17.717 "zone_append": false, 00:08:17.717 "compare": false, 00:08:17.717 "compare_and_write": false, 00:08:17.717 "abort": false, 00:08:17.717 "seek_hole": false, 00:08:17.717 "seek_data": false, 00:08:17.717 "copy": false, 00:08:17.717 "nvme_iov_md": false 00:08:17.717 }, 00:08:17.717 "memory_domains": [ 00:08:17.717 { 00:08:17.717 "dma_device_id": "system", 00:08:17.717 "dma_device_type": 1 00:08:17.717 }, 00:08:17.717 { 00:08:17.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.717 "dma_device_type": 2 00:08:17.717 }, 00:08:17.717 { 00:08:17.717 "dma_device_id": "system", 00:08:17.717 "dma_device_type": 1 00:08:17.717 }, 00:08:17.717 { 00:08:17.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.717 "dma_device_type": 2 00:08:17.717 }, 00:08:17.717 { 00:08:17.717 "dma_device_id": "system", 00:08:17.717 "dma_device_type": 1 00:08:17.717 }, 00:08:17.717 { 00:08:17.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.717 "dma_device_type": 2 00:08:17.717 } 00:08:17.717 ], 00:08:17.717 "driver_specific": { 00:08:17.717 "raid": { 00:08:17.717 "uuid": "f8e4dcb7-8b31-42c3-912f-10d2bbdd2ead", 00:08:17.717 "strip_size_kb": 0, 00:08:17.717 "state": "online", 00:08:17.717 "raid_level": "raid1", 00:08:17.717 "superblock": true, 00:08:17.717 "num_base_bdevs": 3, 00:08:17.717 "num_base_bdevs_discovered": 3, 00:08:17.717 "num_base_bdevs_operational": 3, 00:08:17.717 "base_bdevs_list": [ 00:08:17.717 { 00:08:17.717 "name": "BaseBdev1", 00:08:17.717 "uuid": "8a197da4-9e0c-4016-9b40-486e51a05d66", 00:08:17.717 "is_configured": true, 00:08:17.717 "data_offset": 2048, 00:08:17.717 "data_size": 63488 00:08:17.717 }, 00:08:17.717 { 00:08:17.717 "name": "BaseBdev2", 00:08:17.717 "uuid": "16403125-38ee-4312-97b5-b7d9063803d1", 00:08:17.717 "is_configured": true, 00:08:17.717 "data_offset": 2048, 00:08:17.717 "data_size": 63488 00:08:17.717 }, 00:08:17.717 { 
00:08:17.717 "name": "BaseBdev3", 00:08:17.717 "uuid": "143d4c53-1613-4da7-a952-ca60508aed0b", 00:08:17.717 "is_configured": true, 00:08:17.717 "data_offset": 2048, 00:08:17.717 "data_size": 63488 00:08:17.717 } 00:08:17.717 ] 00:08:17.717 } 00:08:17.717 } 00:08:17.717 }' 00:08:17.717 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:18.037 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:18.037 BaseBdev2 00:08:18.037 BaseBdev3' 00:08:18.037 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.037 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:18.037 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.037 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.037 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:18.037 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.037 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.037 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.037 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.037 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.037 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.037 09:42:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:18.037 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.037 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.037 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.037 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.038 [2024-10-30 09:42:56.478794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.038 
09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.038 "name": "Existed_Raid", 00:08:18.038 "uuid": "f8e4dcb7-8b31-42c3-912f-10d2bbdd2ead", 00:08:18.038 "strip_size_kb": 0, 00:08:18.038 "state": "online", 00:08:18.038 "raid_level": "raid1", 00:08:18.038 "superblock": true, 00:08:18.038 "num_base_bdevs": 3, 00:08:18.038 "num_base_bdevs_discovered": 2, 00:08:18.038 "num_base_bdevs_operational": 2, 00:08:18.038 "base_bdevs_list": [ 00:08:18.038 { 00:08:18.038 "name": null, 00:08:18.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.038 "is_configured": false, 00:08:18.038 "data_offset": 0, 00:08:18.038 "data_size": 63488 00:08:18.038 }, 00:08:18.038 { 00:08:18.038 "name": "BaseBdev2", 00:08:18.038 "uuid": "16403125-38ee-4312-97b5-b7d9063803d1", 00:08:18.038 "is_configured": true, 00:08:18.038 "data_offset": 2048, 00:08:18.038 "data_size": 63488 00:08:18.038 }, 00:08:18.038 { 00:08:18.038 "name": "BaseBdev3", 00:08:18.038 "uuid": "143d4c53-1613-4da7-a952-ca60508aed0b", 00:08:18.038 "is_configured": true, 00:08:18.038 "data_offset": 2048, 00:08:18.038 "data_size": 63488 00:08:18.038 } 00:08:18.038 ] 00:08:18.038 }' 00:08:18.038 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.038 
09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.313 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:18.313 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:18.313 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:18.313 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.313 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.313 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.313 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.313 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:18.313 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:18.313 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:18.313 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.313 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.313 [2024-10-30 09:42:56.887380] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:18.573 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.573 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:18.573 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:18.573 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:18.573 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:18.573 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.573 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.573 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.573 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:18.573 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:18.573 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:18.573 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.573 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.573 [2024-10-30 09:42:56.985992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:18.573 [2024-10-30 09:42:56.986096] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:18.573 [2024-10-30 09:42:57.046246] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:18.573 [2024-10-30 09:42:57.046452] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:18.573 [2024-10-30 09:42:57.046526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:18.573 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.573 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:18.573 09:42:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:18.573 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.573 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.573 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:18.573 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.573 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.573 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:18.573 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:18.573 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:18.573 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.574 BaseBdev2 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.574 [ 00:08:18.574 { 00:08:18.574 "name": "BaseBdev2", 00:08:18.574 "aliases": [ 00:08:18.574 "551ac6da-12be-4b3c-83ae-2f98ec8e7998" 00:08:18.574 ], 00:08:18.574 "product_name": "Malloc disk", 00:08:18.574 "block_size": 512, 00:08:18.574 "num_blocks": 65536, 00:08:18.574 "uuid": "551ac6da-12be-4b3c-83ae-2f98ec8e7998", 00:08:18.574 "assigned_rate_limits": { 00:08:18.574 "rw_ios_per_sec": 0, 00:08:18.574 "rw_mbytes_per_sec": 0, 00:08:18.574 "r_mbytes_per_sec": 0, 00:08:18.574 "w_mbytes_per_sec": 0 00:08:18.574 }, 00:08:18.574 "claimed": false, 00:08:18.574 "zoned": false, 00:08:18.574 "supported_io_types": { 00:08:18.574 "read": true, 00:08:18.574 "write": true, 00:08:18.574 "unmap": true, 00:08:18.574 "flush": true, 00:08:18.574 "reset": true, 00:08:18.574 "nvme_admin": false, 00:08:18.574 "nvme_io": false, 00:08:18.574 
"nvme_io_md": false, 00:08:18.574 "write_zeroes": true, 00:08:18.574 "zcopy": true, 00:08:18.574 "get_zone_info": false, 00:08:18.574 "zone_management": false, 00:08:18.574 "zone_append": false, 00:08:18.574 "compare": false, 00:08:18.574 "compare_and_write": false, 00:08:18.574 "abort": true, 00:08:18.574 "seek_hole": false, 00:08:18.574 "seek_data": false, 00:08:18.574 "copy": true, 00:08:18.574 "nvme_iov_md": false 00:08:18.574 }, 00:08:18.574 "memory_domains": [ 00:08:18.574 { 00:08:18.574 "dma_device_id": "system", 00:08:18.574 "dma_device_type": 1 00:08:18.574 }, 00:08:18.574 { 00:08:18.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.574 "dma_device_type": 2 00:08:18.574 } 00:08:18.574 ], 00:08:18.574 "driver_specific": {} 00:08:18.574 } 00:08:18.574 ] 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.574 BaseBdev3 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.574 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.574 [ 00:08:18.574 { 00:08:18.574 "name": "BaseBdev3", 00:08:18.574 "aliases": [ 00:08:18.574 "b95adcd4-038c-413b-bbee-45b5107cc76b" 00:08:18.574 ], 00:08:18.574 "product_name": "Malloc disk", 00:08:18.574 "block_size": 512, 00:08:18.574 "num_blocks": 65536, 00:08:18.574 "uuid": "b95adcd4-038c-413b-bbee-45b5107cc76b", 00:08:18.833 "assigned_rate_limits": { 00:08:18.833 "rw_ios_per_sec": 0, 00:08:18.833 "rw_mbytes_per_sec": 0, 00:08:18.833 "r_mbytes_per_sec": 0, 00:08:18.833 "w_mbytes_per_sec": 0 00:08:18.833 }, 00:08:18.833 "claimed": false, 00:08:18.833 "zoned": false, 00:08:18.833 "supported_io_types": { 00:08:18.833 "read": true, 00:08:18.833 "write": true, 00:08:18.833 "unmap": true, 00:08:18.833 "flush": true, 00:08:18.833 "reset": true, 00:08:18.833 "nvme_admin": false, 
00:08:18.833 "nvme_io": false, 00:08:18.833 "nvme_io_md": false, 00:08:18.833 "write_zeroes": true, 00:08:18.833 "zcopy": true, 00:08:18.833 "get_zone_info": false, 00:08:18.833 "zone_management": false, 00:08:18.833 "zone_append": false, 00:08:18.833 "compare": false, 00:08:18.833 "compare_and_write": false, 00:08:18.833 "abort": true, 00:08:18.833 "seek_hole": false, 00:08:18.833 "seek_data": false, 00:08:18.833 "copy": true, 00:08:18.833 "nvme_iov_md": false 00:08:18.833 }, 00:08:18.833 "memory_domains": [ 00:08:18.833 { 00:08:18.833 "dma_device_id": "system", 00:08:18.833 "dma_device_type": 1 00:08:18.833 }, 00:08:18.833 { 00:08:18.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.833 "dma_device_type": 2 00:08:18.833 } 00:08:18.833 ], 00:08:18.833 "driver_specific": {} 00:08:18.833 } 00:08:18.833 ] 00:08:18.833 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.833 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:18.833 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:18.833 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:18.833 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:18.833 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.833 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.833 [2024-10-30 09:42:57.202116] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:18.833 [2024-10-30 09:42:57.202257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:18.833 [2024-10-30 09:42:57.202324] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:18.833 [2024-10-30 09:42:57.204203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:18.833 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.833 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:18.833 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.833 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.833 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:18.833 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:18.833 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.833 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.833 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.833 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.833 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.833 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.833 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.833 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.833 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.833 
09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.833 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.833 "name": "Existed_Raid", 00:08:18.833 "uuid": "cddb1751-8698-424a-96ec-8dc1801dae1b", 00:08:18.833 "strip_size_kb": 0, 00:08:18.833 "state": "configuring", 00:08:18.833 "raid_level": "raid1", 00:08:18.833 "superblock": true, 00:08:18.833 "num_base_bdevs": 3, 00:08:18.833 "num_base_bdevs_discovered": 2, 00:08:18.833 "num_base_bdevs_operational": 3, 00:08:18.833 "base_bdevs_list": [ 00:08:18.833 { 00:08:18.834 "name": "BaseBdev1", 00:08:18.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.834 "is_configured": false, 00:08:18.834 "data_offset": 0, 00:08:18.834 "data_size": 0 00:08:18.834 }, 00:08:18.834 { 00:08:18.834 "name": "BaseBdev2", 00:08:18.834 "uuid": "551ac6da-12be-4b3c-83ae-2f98ec8e7998", 00:08:18.834 "is_configured": true, 00:08:18.834 "data_offset": 2048, 00:08:18.834 "data_size": 63488 00:08:18.834 }, 00:08:18.834 { 00:08:18.834 "name": "BaseBdev3", 00:08:18.834 "uuid": "b95adcd4-038c-413b-bbee-45b5107cc76b", 00:08:18.834 "is_configured": true, 00:08:18.834 "data_offset": 2048, 00:08:18.834 "data_size": 63488 00:08:18.834 } 00:08:18.834 ] 00:08:18.834 }' 00:08:18.834 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.834 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.095 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:19.095 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.095 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.095 [2024-10-30 09:42:57.538208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:19.095 09:42:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.095 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:19.095 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.095 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.095 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:19.095 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:19.095 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.095 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.095 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.095 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.095 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.095 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.095 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.095 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.095 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.095 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.095 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.095 "name": 
"Existed_Raid", 00:08:19.095 "uuid": "cddb1751-8698-424a-96ec-8dc1801dae1b", 00:08:19.095 "strip_size_kb": 0, 00:08:19.095 "state": "configuring", 00:08:19.095 "raid_level": "raid1", 00:08:19.095 "superblock": true, 00:08:19.095 "num_base_bdevs": 3, 00:08:19.095 "num_base_bdevs_discovered": 1, 00:08:19.095 "num_base_bdevs_operational": 3, 00:08:19.095 "base_bdevs_list": [ 00:08:19.095 { 00:08:19.095 "name": "BaseBdev1", 00:08:19.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.095 "is_configured": false, 00:08:19.095 "data_offset": 0, 00:08:19.095 "data_size": 0 00:08:19.095 }, 00:08:19.095 { 00:08:19.095 "name": null, 00:08:19.095 "uuid": "551ac6da-12be-4b3c-83ae-2f98ec8e7998", 00:08:19.095 "is_configured": false, 00:08:19.095 "data_offset": 0, 00:08:19.095 "data_size": 63488 00:08:19.095 }, 00:08:19.095 { 00:08:19.095 "name": "BaseBdev3", 00:08:19.095 "uuid": "b95adcd4-038c-413b-bbee-45b5107cc76b", 00:08:19.095 "is_configured": true, 00:08:19.095 "data_offset": 2048, 00:08:19.095 "data_size": 63488 00:08:19.095 } 00:08:19.095 ] 00:08:19.095 }' 00:08:19.095 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.095 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:19.357 
09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.357 [2024-10-30 09:42:57.928960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:19.357 BaseBdev1 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.357 [ 00:08:19.357 { 00:08:19.357 "name": "BaseBdev1", 00:08:19.357 "aliases": [ 00:08:19.357 "4aea410f-69c3-4a3a-8568-d5c7990475be" 00:08:19.357 ], 00:08:19.357 "product_name": "Malloc disk", 00:08:19.357 "block_size": 512, 00:08:19.357 "num_blocks": 65536, 00:08:19.357 "uuid": "4aea410f-69c3-4a3a-8568-d5c7990475be", 00:08:19.357 "assigned_rate_limits": { 00:08:19.357 "rw_ios_per_sec": 0, 00:08:19.357 "rw_mbytes_per_sec": 0, 00:08:19.357 "r_mbytes_per_sec": 0, 00:08:19.357 "w_mbytes_per_sec": 0 00:08:19.357 }, 00:08:19.357 "claimed": true, 00:08:19.357 "claim_type": "exclusive_write", 00:08:19.357 "zoned": false, 00:08:19.357 "supported_io_types": { 00:08:19.357 "read": true, 00:08:19.357 "write": true, 00:08:19.357 "unmap": true, 00:08:19.357 "flush": true, 00:08:19.357 "reset": true, 00:08:19.357 "nvme_admin": false, 00:08:19.357 "nvme_io": false, 00:08:19.357 "nvme_io_md": false, 00:08:19.357 "write_zeroes": true, 00:08:19.357 "zcopy": true, 00:08:19.357 "get_zone_info": false, 00:08:19.357 "zone_management": false, 00:08:19.357 "zone_append": false, 00:08:19.357 "compare": false, 00:08:19.357 "compare_and_write": false, 00:08:19.357 "abort": true, 00:08:19.357 "seek_hole": false, 00:08:19.357 "seek_data": false, 00:08:19.357 "copy": true, 00:08:19.357 "nvme_iov_md": false 00:08:19.357 }, 00:08:19.357 "memory_domains": [ 00:08:19.357 { 00:08:19.357 "dma_device_id": "system", 00:08:19.357 "dma_device_type": 1 00:08:19.357 }, 00:08:19.357 { 00:08:19.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.357 "dma_device_type": 2 00:08:19.357 } 00:08:19.357 ], 00:08:19.357 "driver_specific": {} 00:08:19.357 } 00:08:19.357 ] 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:19.357 
09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.357 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.619 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.619 "name": "Existed_Raid", 00:08:19.619 "uuid": "cddb1751-8698-424a-96ec-8dc1801dae1b", 00:08:19.619 "strip_size_kb": 0, 
00:08:19.619 "state": "configuring", 00:08:19.619 "raid_level": "raid1", 00:08:19.619 "superblock": true, 00:08:19.619 "num_base_bdevs": 3, 00:08:19.619 "num_base_bdevs_discovered": 2, 00:08:19.619 "num_base_bdevs_operational": 3, 00:08:19.619 "base_bdevs_list": [ 00:08:19.619 { 00:08:19.619 "name": "BaseBdev1", 00:08:19.619 "uuid": "4aea410f-69c3-4a3a-8568-d5c7990475be", 00:08:19.619 "is_configured": true, 00:08:19.619 "data_offset": 2048, 00:08:19.619 "data_size": 63488 00:08:19.619 }, 00:08:19.619 { 00:08:19.619 "name": null, 00:08:19.619 "uuid": "551ac6da-12be-4b3c-83ae-2f98ec8e7998", 00:08:19.619 "is_configured": false, 00:08:19.619 "data_offset": 0, 00:08:19.619 "data_size": 63488 00:08:19.619 }, 00:08:19.619 { 00:08:19.619 "name": "BaseBdev3", 00:08:19.619 "uuid": "b95adcd4-038c-413b-bbee-45b5107cc76b", 00:08:19.619 "is_configured": true, 00:08:19.619 "data_offset": 2048, 00:08:19.619 "data_size": 63488 00:08:19.619 } 00:08:19.619 ] 00:08:19.619 }' 00:08:19.619 09:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.619 09:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.880 [2024-10-30 09:42:58.305101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.880 "name": "Existed_Raid", 00:08:19.880 "uuid": "cddb1751-8698-424a-96ec-8dc1801dae1b", 00:08:19.880 "strip_size_kb": 0, 00:08:19.880 "state": "configuring", 00:08:19.880 "raid_level": "raid1", 00:08:19.880 "superblock": true, 00:08:19.880 "num_base_bdevs": 3, 00:08:19.880 "num_base_bdevs_discovered": 1, 00:08:19.880 "num_base_bdevs_operational": 3, 00:08:19.880 "base_bdevs_list": [ 00:08:19.880 { 00:08:19.880 "name": "BaseBdev1", 00:08:19.880 "uuid": "4aea410f-69c3-4a3a-8568-d5c7990475be", 00:08:19.880 "is_configured": true, 00:08:19.880 "data_offset": 2048, 00:08:19.880 "data_size": 63488 00:08:19.880 }, 00:08:19.880 { 00:08:19.880 "name": null, 00:08:19.880 "uuid": "551ac6da-12be-4b3c-83ae-2f98ec8e7998", 00:08:19.880 "is_configured": false, 00:08:19.880 "data_offset": 0, 00:08:19.880 "data_size": 63488 00:08:19.880 }, 00:08:19.880 { 00:08:19.880 "name": null, 00:08:19.880 "uuid": "b95adcd4-038c-413b-bbee-45b5107cc76b", 00:08:19.880 "is_configured": false, 00:08:19.880 "data_offset": 0, 00:08:19.880 "data_size": 63488 00:08:19.880 } 00:08:19.880 ] 00:08:19.880 }' 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.880 09:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.141 [2024-10-30 09:42:58.645228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.141 "name": "Existed_Raid", 00:08:20.141 "uuid": "cddb1751-8698-424a-96ec-8dc1801dae1b", 00:08:20.141 "strip_size_kb": 0, 00:08:20.141 "state": "configuring", 00:08:20.141 "raid_level": "raid1", 00:08:20.141 "superblock": true, 00:08:20.141 "num_base_bdevs": 3, 00:08:20.141 "num_base_bdevs_discovered": 2, 00:08:20.141 "num_base_bdevs_operational": 3, 00:08:20.141 "base_bdevs_list": [ 00:08:20.141 { 00:08:20.141 "name": "BaseBdev1", 00:08:20.141 "uuid": "4aea410f-69c3-4a3a-8568-d5c7990475be", 00:08:20.141 "is_configured": true, 00:08:20.141 "data_offset": 2048, 00:08:20.141 "data_size": 63488 00:08:20.141 }, 00:08:20.141 { 00:08:20.141 "name": null, 00:08:20.141 "uuid": "551ac6da-12be-4b3c-83ae-2f98ec8e7998", 00:08:20.141 "is_configured": false, 00:08:20.141 "data_offset": 0, 00:08:20.141 "data_size": 63488 00:08:20.141 }, 00:08:20.141 { 00:08:20.141 "name": "BaseBdev3", 00:08:20.141 "uuid": "b95adcd4-038c-413b-bbee-45b5107cc76b", 00:08:20.141 "is_configured": true, 00:08:20.141 "data_offset": 2048, 00:08:20.141 "data_size": 63488 00:08:20.141 } 00:08:20.141 ] 00:08:20.141 }' 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.141 09:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.403 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.403 09:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.403 09:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.403 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:20.403 09:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.403 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:20.403 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:20.403 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.403 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.403 [2024-10-30 09:42:59.013304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:20.665 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.665 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:20.665 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.665 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.665 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:20.665 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:08:20.665 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.665 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.665 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.665 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.665 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.665 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.665 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.665 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.665 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.665 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.665 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.665 "name": "Existed_Raid", 00:08:20.665 "uuid": "cddb1751-8698-424a-96ec-8dc1801dae1b", 00:08:20.665 "strip_size_kb": 0, 00:08:20.665 "state": "configuring", 00:08:20.665 "raid_level": "raid1", 00:08:20.665 "superblock": true, 00:08:20.665 "num_base_bdevs": 3, 00:08:20.665 "num_base_bdevs_discovered": 1, 00:08:20.665 "num_base_bdevs_operational": 3, 00:08:20.665 "base_bdevs_list": [ 00:08:20.665 { 00:08:20.665 "name": null, 00:08:20.665 "uuid": "4aea410f-69c3-4a3a-8568-d5c7990475be", 00:08:20.665 "is_configured": false, 00:08:20.665 "data_offset": 0, 00:08:20.665 "data_size": 63488 00:08:20.665 }, 00:08:20.665 { 00:08:20.665 "name": null, 00:08:20.665 "uuid": 
"551ac6da-12be-4b3c-83ae-2f98ec8e7998", 00:08:20.665 "is_configured": false, 00:08:20.665 "data_offset": 0, 00:08:20.665 "data_size": 63488 00:08:20.665 }, 00:08:20.665 { 00:08:20.665 "name": "BaseBdev3", 00:08:20.666 "uuid": "b95adcd4-038c-413b-bbee-45b5107cc76b", 00:08:20.666 "is_configured": true, 00:08:20.666 "data_offset": 2048, 00:08:20.666 "data_size": 63488 00:08:20.666 } 00:08:20.666 ] 00:08:20.666 }' 00:08:20.666 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.666 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.927 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:20.927 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.927 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.927 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.927 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.927 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:20.927 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:20.927 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.927 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.928 [2024-10-30 09:42:59.428231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:20.928 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.928 09:42:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:20.928 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.928 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.928 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:20.928 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:20.928 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.928 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.928 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.928 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.928 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.928 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.928 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.928 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.928 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.928 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.928 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.928 "name": "Existed_Raid", 00:08:20.928 "uuid": "cddb1751-8698-424a-96ec-8dc1801dae1b", 00:08:20.928 "strip_size_kb": 0, 00:08:20.928 "state": "configuring", 00:08:20.928 
"raid_level": "raid1", 00:08:20.928 "superblock": true, 00:08:20.928 "num_base_bdevs": 3, 00:08:20.928 "num_base_bdevs_discovered": 2, 00:08:20.928 "num_base_bdevs_operational": 3, 00:08:20.928 "base_bdevs_list": [ 00:08:20.928 { 00:08:20.928 "name": null, 00:08:20.928 "uuid": "4aea410f-69c3-4a3a-8568-d5c7990475be", 00:08:20.928 "is_configured": false, 00:08:20.928 "data_offset": 0, 00:08:20.928 "data_size": 63488 00:08:20.928 }, 00:08:20.928 { 00:08:20.928 "name": "BaseBdev2", 00:08:20.928 "uuid": "551ac6da-12be-4b3c-83ae-2f98ec8e7998", 00:08:20.928 "is_configured": true, 00:08:20.928 "data_offset": 2048, 00:08:20.928 "data_size": 63488 00:08:20.928 }, 00:08:20.928 { 00:08:20.928 "name": "BaseBdev3", 00:08:20.928 "uuid": "b95adcd4-038c-413b-bbee-45b5107cc76b", 00:08:20.928 "is_configured": true, 00:08:20.928 "data_offset": 2048, 00:08:20.928 "data_size": 63488 00:08:20.928 } 00:08:20.928 ] 00:08:20.928 }' 00:08:20.928 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.928 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.190 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.190 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.190 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.190 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:21.190 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.190 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:21.190 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.190 09:42:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:21.190 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.190 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.190 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.190 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4aea410f-69c3-4a3a-8568-d5c7990475be 00:08:21.190 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.190 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.452 [2024-10-30 09:42:59.826844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:21.452 [2024-10-30 09:42:59.827197] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:21.452 [2024-10-30 09:42:59.827216] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:21.452 [2024-10-30 09:42:59.827464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:21.452 [2024-10-30 09:42:59.827595] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:21.452 [2024-10-30 09:42:59.827606] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:21.452 NewBaseBdev 00:08:21.452 [2024-10-30 09:42:59.827721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.452 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.452 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:21.452 
09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:08:21.452 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:21.452 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:21.452 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:21.452 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:21.452 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:21.452 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.452 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.452 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.452 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:21.452 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.452 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.452 [ 00:08:21.452 { 00:08:21.452 "name": "NewBaseBdev", 00:08:21.452 "aliases": [ 00:08:21.452 "4aea410f-69c3-4a3a-8568-d5c7990475be" 00:08:21.452 ], 00:08:21.452 "product_name": "Malloc disk", 00:08:21.452 "block_size": 512, 00:08:21.452 "num_blocks": 65536, 00:08:21.452 "uuid": "4aea410f-69c3-4a3a-8568-d5c7990475be", 00:08:21.452 "assigned_rate_limits": { 00:08:21.452 "rw_ios_per_sec": 0, 00:08:21.452 "rw_mbytes_per_sec": 0, 00:08:21.452 "r_mbytes_per_sec": 0, 00:08:21.452 "w_mbytes_per_sec": 0 00:08:21.452 }, 00:08:21.452 "claimed": true, 00:08:21.452 "claim_type": "exclusive_write", 00:08:21.452 
"zoned": false, 00:08:21.452 "supported_io_types": { 00:08:21.452 "read": true, 00:08:21.452 "write": true, 00:08:21.452 "unmap": true, 00:08:21.452 "flush": true, 00:08:21.452 "reset": true, 00:08:21.452 "nvme_admin": false, 00:08:21.452 "nvme_io": false, 00:08:21.452 "nvme_io_md": false, 00:08:21.452 "write_zeroes": true, 00:08:21.452 "zcopy": true, 00:08:21.452 "get_zone_info": false, 00:08:21.452 "zone_management": false, 00:08:21.452 "zone_append": false, 00:08:21.452 "compare": false, 00:08:21.452 "compare_and_write": false, 00:08:21.452 "abort": true, 00:08:21.452 "seek_hole": false, 00:08:21.452 "seek_data": false, 00:08:21.452 "copy": true, 00:08:21.452 "nvme_iov_md": false 00:08:21.452 }, 00:08:21.452 "memory_domains": [ 00:08:21.452 { 00:08:21.452 "dma_device_id": "system", 00:08:21.452 "dma_device_type": 1 00:08:21.452 }, 00:08:21.452 { 00:08:21.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.452 "dma_device_type": 2 00:08:21.452 } 00:08:21.452 ], 00:08:21.452 "driver_specific": {} 00:08:21.452 } 00:08:21.452 ] 00:08:21.452 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.452 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:21.452 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:21.453 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.453 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.453 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:21.453 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:21.453 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:08:21.453 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.453 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.453 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.453 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.453 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.453 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.453 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.453 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.453 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.453 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.453 "name": "Existed_Raid", 00:08:21.453 "uuid": "cddb1751-8698-424a-96ec-8dc1801dae1b", 00:08:21.453 "strip_size_kb": 0, 00:08:21.453 "state": "online", 00:08:21.453 "raid_level": "raid1", 00:08:21.453 "superblock": true, 00:08:21.453 "num_base_bdevs": 3, 00:08:21.453 "num_base_bdevs_discovered": 3, 00:08:21.453 "num_base_bdevs_operational": 3, 00:08:21.453 "base_bdevs_list": [ 00:08:21.453 { 00:08:21.453 "name": "NewBaseBdev", 00:08:21.453 "uuid": "4aea410f-69c3-4a3a-8568-d5c7990475be", 00:08:21.453 "is_configured": true, 00:08:21.453 "data_offset": 2048, 00:08:21.453 "data_size": 63488 00:08:21.453 }, 00:08:21.453 { 00:08:21.453 "name": "BaseBdev2", 00:08:21.453 "uuid": "551ac6da-12be-4b3c-83ae-2f98ec8e7998", 00:08:21.453 "is_configured": true, 00:08:21.453 "data_offset": 2048, 00:08:21.453 "data_size": 63488 00:08:21.453 }, 00:08:21.453 
{ 00:08:21.453 "name": "BaseBdev3", 00:08:21.453 "uuid": "b95adcd4-038c-413b-bbee-45b5107cc76b", 00:08:21.453 "is_configured": true, 00:08:21.453 "data_offset": 2048, 00:08:21.453 "data_size": 63488 00:08:21.453 } 00:08:21.453 ] 00:08:21.453 }' 00:08:21.453 09:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.453 09:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.714 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:21.714 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:21.714 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:21.714 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:21.714 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:21.714 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:21.714 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:21.714 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:21.714 09:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.714 09:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.714 [2024-10-30 09:43:00.211322] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.714 09:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.714 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:21.714 "name": "Existed_Raid", 00:08:21.714 
"aliases": [ 00:08:21.714 "cddb1751-8698-424a-96ec-8dc1801dae1b" 00:08:21.714 ], 00:08:21.714 "product_name": "Raid Volume", 00:08:21.714 "block_size": 512, 00:08:21.714 "num_blocks": 63488, 00:08:21.714 "uuid": "cddb1751-8698-424a-96ec-8dc1801dae1b", 00:08:21.714 "assigned_rate_limits": { 00:08:21.714 "rw_ios_per_sec": 0, 00:08:21.714 "rw_mbytes_per_sec": 0, 00:08:21.714 "r_mbytes_per_sec": 0, 00:08:21.714 "w_mbytes_per_sec": 0 00:08:21.714 }, 00:08:21.714 "claimed": false, 00:08:21.714 "zoned": false, 00:08:21.714 "supported_io_types": { 00:08:21.714 "read": true, 00:08:21.714 "write": true, 00:08:21.714 "unmap": false, 00:08:21.714 "flush": false, 00:08:21.714 "reset": true, 00:08:21.714 "nvme_admin": false, 00:08:21.714 "nvme_io": false, 00:08:21.714 "nvme_io_md": false, 00:08:21.714 "write_zeroes": true, 00:08:21.714 "zcopy": false, 00:08:21.714 "get_zone_info": false, 00:08:21.714 "zone_management": false, 00:08:21.714 "zone_append": false, 00:08:21.714 "compare": false, 00:08:21.714 "compare_and_write": false, 00:08:21.714 "abort": false, 00:08:21.714 "seek_hole": false, 00:08:21.714 "seek_data": false, 00:08:21.714 "copy": false, 00:08:21.714 "nvme_iov_md": false 00:08:21.714 }, 00:08:21.714 "memory_domains": [ 00:08:21.714 { 00:08:21.714 "dma_device_id": "system", 00:08:21.714 "dma_device_type": 1 00:08:21.714 }, 00:08:21.714 { 00:08:21.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.714 "dma_device_type": 2 00:08:21.714 }, 00:08:21.714 { 00:08:21.714 "dma_device_id": "system", 00:08:21.714 "dma_device_type": 1 00:08:21.714 }, 00:08:21.714 { 00:08:21.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.714 "dma_device_type": 2 00:08:21.714 }, 00:08:21.714 { 00:08:21.714 "dma_device_id": "system", 00:08:21.714 "dma_device_type": 1 00:08:21.714 }, 00:08:21.714 { 00:08:21.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.714 "dma_device_type": 2 00:08:21.714 } 00:08:21.714 ], 00:08:21.714 "driver_specific": { 00:08:21.714 "raid": { 00:08:21.714 
"uuid": "cddb1751-8698-424a-96ec-8dc1801dae1b", 00:08:21.714 "strip_size_kb": 0, 00:08:21.714 "state": "online", 00:08:21.714 "raid_level": "raid1", 00:08:21.714 "superblock": true, 00:08:21.714 "num_base_bdevs": 3, 00:08:21.714 "num_base_bdevs_discovered": 3, 00:08:21.714 "num_base_bdevs_operational": 3, 00:08:21.714 "base_bdevs_list": [ 00:08:21.714 { 00:08:21.714 "name": "NewBaseBdev", 00:08:21.714 "uuid": "4aea410f-69c3-4a3a-8568-d5c7990475be", 00:08:21.714 "is_configured": true, 00:08:21.714 "data_offset": 2048, 00:08:21.714 "data_size": 63488 00:08:21.714 }, 00:08:21.714 { 00:08:21.714 "name": "BaseBdev2", 00:08:21.714 "uuid": "551ac6da-12be-4b3c-83ae-2f98ec8e7998", 00:08:21.714 "is_configured": true, 00:08:21.714 "data_offset": 2048, 00:08:21.714 "data_size": 63488 00:08:21.714 }, 00:08:21.714 { 00:08:21.714 "name": "BaseBdev3", 00:08:21.714 "uuid": "b95adcd4-038c-413b-bbee-45b5107cc76b", 00:08:21.714 "is_configured": true, 00:08:21.714 "data_offset": 2048, 00:08:21.714 "data_size": 63488 00:08:21.714 } 00:08:21.714 ] 00:08:21.714 } 00:08:21.714 } 00:08:21.714 }' 00:08:21.714 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:21.714 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:21.714 BaseBdev2 00:08:21.714 BaseBdev3' 00:08:21.714 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.714 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:21.714 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.714 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:21.714 09:43:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.714 09:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.714 09:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.714 09:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.976 09:43:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.976 [2024-10-30 09:43:00.419036] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:21.976 [2024-10-30 09:43:00.419075] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.976 [2024-10-30 09:43:00.419141] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.976 [2024-10-30 09:43:00.419414] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.976 [2024-10-30 09:43:00.419424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66479 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # 
'[' -z 66479 ']' 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 66479 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66479 00:08:21.976 killing process with pid 66479 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66479' 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 66479 00:08:21.976 [2024-10-30 09:43:00.452990] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:21.976 09:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 66479 00:08:22.237 [2024-10-30 09:43:00.640980] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:22.808 09:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:22.808 00:08:22.808 real 0m7.812s 00:08:22.808 user 0m12.461s 00:08:22.808 sys 0m1.230s 00:08:22.808 09:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:22.808 09:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.808 ************************************ 00:08:22.808 END TEST raid_state_function_test_sb 00:08:22.808 ************************************ 00:08:22.808 09:43:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:08:22.808 09:43:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:22.808 09:43:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:22.808 09:43:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:22.808 ************************************ 00:08:22.808 START TEST raid_superblock_test 00:08:22.808 ************************************ 00:08:22.808 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 3 00:08:22.809 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:22.809 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:22.809 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:22.809 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:22.809 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:22.809 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:22.809 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:22.809 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:22.809 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:22.809 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:22.809 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:22.809 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:22.809 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:22.809 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:08:22.809 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:22.809 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67072 00:08:22.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.809 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67072 00:08:22.809 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 67072 ']' 00:08:22.809 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.809 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:22.809 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.809 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:22.809 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:22.809 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.069 [2024-10-30 09:43:01.489909] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:08:23.069 [2024-10-30 09:43:01.490038] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67072 ] 00:08:23.069 [2024-10-30 09:43:01.648210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.329 [2024-10-30 09:43:01.749121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.329 [2024-10-30 09:43:01.885333] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.329 [2024-10-30 09:43:01.885362] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.900 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:23.900 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:08:23.900 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:23.900 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:23.900 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:23.900 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:23.900 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:23.900 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:23.900 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:23.900 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:23.900 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:23.900 
09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.901 malloc1 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.901 [2024-10-30 09:43:02.370891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:23.901 [2024-10-30 09:43:02.370954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.901 [2024-10-30 09:43:02.370977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:23.901 [2024-10-30 09:43:02.370986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.901 [2024-10-30 09:43:02.373175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.901 [2024-10-30 09:43:02.373315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:23.901 pt1 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.901 malloc2 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.901 [2024-10-30 09:43:02.415026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:23.901 [2024-10-30 09:43:02.415095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.901 [2024-10-30 09:43:02.415117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:23.901 [2024-10-30 09:43:02.415126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.901 [2024-10-30 09:43:02.417242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.901 [2024-10-30 09:43:02.417275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:23.901 
pt2 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.901 malloc3 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.901 [2024-10-30 09:43:02.468940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:23.901 [2024-10-30 09:43:02.469113] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.901 [2024-10-30 09:43:02.469159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:23.901 [2024-10-30 09:43:02.469595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.901 [2024-10-30 09:43:02.471788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.901 [2024-10-30 09:43:02.471904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:23.901 pt3 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.901 [2024-10-30 09:43:02.476988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:23.901 [2024-10-30 09:43:02.478831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:23.901 [2024-10-30 09:43:02.478986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:23.901 [2024-10-30 09:43:02.479167] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:23.901 [2024-10-30 09:43:02.479186] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:23.901 [2024-10-30 09:43:02.479439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:23.901 
[2024-10-30 09:43:02.479590] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:23.901 [2024-10-30 09:43:02.479601] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:23.901 [2024-10-30 09:43:02.479748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.901 "name": "raid_bdev1", 00:08:23.901 "uuid": "cc5c9329-14b6-4a1a-b78a-06538ffc4f4b", 00:08:23.901 "strip_size_kb": 0, 00:08:23.901 "state": "online", 00:08:23.901 "raid_level": "raid1", 00:08:23.901 "superblock": true, 00:08:23.901 "num_base_bdevs": 3, 00:08:23.901 "num_base_bdevs_discovered": 3, 00:08:23.901 "num_base_bdevs_operational": 3, 00:08:23.901 "base_bdevs_list": [ 00:08:23.901 { 00:08:23.901 "name": "pt1", 00:08:23.901 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:23.901 "is_configured": true, 00:08:23.901 "data_offset": 2048, 00:08:23.901 "data_size": 63488 00:08:23.901 }, 00:08:23.901 { 00:08:23.901 "name": "pt2", 00:08:23.901 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:23.901 "is_configured": true, 00:08:23.901 "data_offset": 2048, 00:08:23.901 "data_size": 63488 00:08:23.901 }, 00:08:23.901 { 00:08:23.901 "name": "pt3", 00:08:23.901 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:23.901 "is_configured": true, 00:08:23.901 "data_offset": 2048, 00:08:23.901 "data_size": 63488 00:08:23.901 } 00:08:23.901 ] 00:08:23.901 }' 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.901 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.504 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:24.504 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:24.504 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:24.504 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:24.504 09:43:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:24.504 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:24.504 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:24.504 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.504 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.504 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:24.504 [2024-10-30 09:43:02.821373] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:24.504 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.504 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:24.504 "name": "raid_bdev1", 00:08:24.504 "aliases": [ 00:08:24.504 "cc5c9329-14b6-4a1a-b78a-06538ffc4f4b" 00:08:24.504 ], 00:08:24.504 "product_name": "Raid Volume", 00:08:24.504 "block_size": 512, 00:08:24.504 "num_blocks": 63488, 00:08:24.504 "uuid": "cc5c9329-14b6-4a1a-b78a-06538ffc4f4b", 00:08:24.504 "assigned_rate_limits": { 00:08:24.504 "rw_ios_per_sec": 0, 00:08:24.504 "rw_mbytes_per_sec": 0, 00:08:24.504 "r_mbytes_per_sec": 0, 00:08:24.504 "w_mbytes_per_sec": 0 00:08:24.504 }, 00:08:24.504 "claimed": false, 00:08:24.504 "zoned": false, 00:08:24.504 "supported_io_types": { 00:08:24.504 "read": true, 00:08:24.504 "write": true, 00:08:24.504 "unmap": false, 00:08:24.504 "flush": false, 00:08:24.504 "reset": true, 00:08:24.504 "nvme_admin": false, 00:08:24.504 "nvme_io": false, 00:08:24.504 "nvme_io_md": false, 00:08:24.504 "write_zeroes": true, 00:08:24.504 "zcopy": false, 00:08:24.504 "get_zone_info": false, 00:08:24.504 "zone_management": false, 00:08:24.504 "zone_append": false, 00:08:24.504 "compare": false, 00:08:24.504 
"compare_and_write": false, 00:08:24.504 "abort": false, 00:08:24.504 "seek_hole": false, 00:08:24.504 "seek_data": false, 00:08:24.504 "copy": false, 00:08:24.504 "nvme_iov_md": false 00:08:24.504 }, 00:08:24.504 "memory_domains": [ 00:08:24.504 { 00:08:24.504 "dma_device_id": "system", 00:08:24.504 "dma_device_type": 1 00:08:24.504 }, 00:08:24.504 { 00:08:24.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.504 "dma_device_type": 2 00:08:24.504 }, 00:08:24.504 { 00:08:24.504 "dma_device_id": "system", 00:08:24.504 "dma_device_type": 1 00:08:24.504 }, 00:08:24.504 { 00:08:24.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.504 "dma_device_type": 2 00:08:24.504 }, 00:08:24.504 { 00:08:24.504 "dma_device_id": "system", 00:08:24.504 "dma_device_type": 1 00:08:24.504 }, 00:08:24.504 { 00:08:24.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.504 "dma_device_type": 2 00:08:24.504 } 00:08:24.504 ], 00:08:24.504 "driver_specific": { 00:08:24.504 "raid": { 00:08:24.504 "uuid": "cc5c9329-14b6-4a1a-b78a-06538ffc4f4b", 00:08:24.504 "strip_size_kb": 0, 00:08:24.504 "state": "online", 00:08:24.504 "raid_level": "raid1", 00:08:24.504 "superblock": true, 00:08:24.504 "num_base_bdevs": 3, 00:08:24.504 "num_base_bdevs_discovered": 3, 00:08:24.504 "num_base_bdevs_operational": 3, 00:08:24.504 "base_bdevs_list": [ 00:08:24.504 { 00:08:24.504 "name": "pt1", 00:08:24.504 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:24.504 "is_configured": true, 00:08:24.504 "data_offset": 2048, 00:08:24.504 "data_size": 63488 00:08:24.504 }, 00:08:24.504 { 00:08:24.504 "name": "pt2", 00:08:24.504 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:24.504 "is_configured": true, 00:08:24.504 "data_offset": 2048, 00:08:24.504 "data_size": 63488 00:08:24.504 }, 00:08:24.504 { 00:08:24.504 "name": "pt3", 00:08:24.504 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:24.504 "is_configured": true, 00:08:24.505 "data_offset": 2048, 00:08:24.505 "data_size": 63488 00:08:24.505 } 
00:08:24.505 ] 00:08:24.505 } 00:08:24.505 } 00:08:24.505 }' 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:24.505 pt2 00:08:24.505 pt3' 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.505 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.505 [2024-10-30 09:43:03.017381] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cc5c9329-14b6-4a1a-b78a-06538ffc4f4b 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cc5c9329-14b6-4a1a-b78a-06538ffc4f4b ']' 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.505 [2024-10-30 09:43:03.049083] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:24.505 [2024-10-30 09:43:03.049106] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:24.505 [2024-10-30 09:43:03.049176] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.505 [2024-10-30 09:43:03.049252] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.505 [2024-10-30 09:43:03.049262] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:24.505 09:43:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.505 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.766 [2024-10-30 09:43:03.157158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:24.766 [2024-10-30 09:43:03.159080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:24.766 [2024-10-30 09:43:03.159131] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:24.766 [2024-10-30 09:43:03.159177] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:24.766 [2024-10-30 09:43:03.159226] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:24.766 [2024-10-30 09:43:03.159245] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:24.766 [2024-10-30 09:43:03.159261] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:24.766 [2024-10-30 09:43:03.159270] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:24.766 request: 00:08:24.766 { 00:08:24.766 "name": "raid_bdev1", 00:08:24.766 "raid_level": "raid1", 00:08:24.766 "base_bdevs": [ 00:08:24.766 "malloc1", 00:08:24.766 "malloc2", 00:08:24.766 "malloc3" 00:08:24.766 ], 00:08:24.766 "superblock": false, 00:08:24.766 "method": "bdev_raid_create", 00:08:24.766 "req_id": 1 00:08:24.766 } 00:08:24.766 Got JSON-RPC error response 00:08:24.766 response: 00:08:24.766 { 00:08:24.766 "code": -17, 00:08:24.766 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:24.766 } 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
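The negative test above expects `bdev_raid_create` to fail because the malloc bdevs already carry a superblock for a different raid bdev. A minimal sketch of how that failure surfaces through JSON-RPC — the error payload is copied from the log above, and the point is that SPDK reports it as a negative errno value (`-17` = `EEXIST`, "File exists"):

```python
import errno
import json

# Error payload copied verbatim from the JSON-RPC response in the log above.
response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

# The RPC layer encodes errors as negative errno values, so -17
# maps to EEXIST: the base bdevs are already claimed by (or carry a
# superblock for) another raid bdev, and creation is refused.
assert -response["code"] == errno.EEXIST
print(errno.errorcode[-response["code"]])
```

This is why the surrounding `NOT rpc_cmd …` wrapper treats a non-zero exit status (`es=1`) as the expected outcome here.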
rpc_cmd bdev_raid_get_bdevs all 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.766 [2024-10-30 09:43:03.201121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:24.766 [2024-10-30 09:43:03.201171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.766 [2024-10-30 09:43:03.201191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:24.766 [2024-10-30 09:43:03.201201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.766 [2024-10-30 09:43:03.203354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.766 [2024-10-30 09:43:03.203387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:24.766 [2024-10-30 09:43:03.203464] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:24.766 [2024-10-30 09:43:03.203509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:24.766 pt1 00:08:24.766 
09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.766 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.767 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.767 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.767 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.767 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.767 "name": "raid_bdev1", 00:08:24.767 "uuid": "cc5c9329-14b6-4a1a-b78a-06538ffc4f4b", 00:08:24.767 "strip_size_kb": 0, 00:08:24.767 
"state": "configuring", 00:08:24.767 "raid_level": "raid1", 00:08:24.767 "superblock": true, 00:08:24.767 "num_base_bdevs": 3, 00:08:24.767 "num_base_bdevs_discovered": 1, 00:08:24.767 "num_base_bdevs_operational": 3, 00:08:24.767 "base_bdevs_list": [ 00:08:24.767 { 00:08:24.767 "name": "pt1", 00:08:24.767 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:24.767 "is_configured": true, 00:08:24.767 "data_offset": 2048, 00:08:24.767 "data_size": 63488 00:08:24.767 }, 00:08:24.767 { 00:08:24.767 "name": null, 00:08:24.767 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:24.767 "is_configured": false, 00:08:24.767 "data_offset": 2048, 00:08:24.767 "data_size": 63488 00:08:24.767 }, 00:08:24.767 { 00:08:24.767 "name": null, 00:08:24.767 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:24.767 "is_configured": false, 00:08:24.767 "data_offset": 2048, 00:08:24.767 "data_size": 63488 00:08:24.767 } 00:08:24.767 ] 00:08:24.767 }' 00:08:24.767 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.767 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.028 [2024-10-30 09:43:03.521205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:25.028 [2024-10-30 09:43:03.521259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.028 [2024-10-30 09:43:03.521280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:08:25.028 
[2024-10-30 09:43:03.521289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.028 [2024-10-30 09:43:03.521675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.028 [2024-10-30 09:43:03.521688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:25.028 [2024-10-30 09:43:03.521760] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:25.028 [2024-10-30 09:43:03.521779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:25.028 pt2 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.028 [2024-10-30 09:43:03.529212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.028 "name": "raid_bdev1", 00:08:25.028 "uuid": "cc5c9329-14b6-4a1a-b78a-06538ffc4f4b", 00:08:25.028 "strip_size_kb": 0, 00:08:25.028 "state": "configuring", 00:08:25.028 "raid_level": "raid1", 00:08:25.028 "superblock": true, 00:08:25.028 "num_base_bdevs": 3, 00:08:25.028 "num_base_bdevs_discovered": 1, 00:08:25.028 "num_base_bdevs_operational": 3, 00:08:25.028 "base_bdevs_list": [ 00:08:25.028 { 00:08:25.028 "name": "pt1", 00:08:25.028 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:25.028 "is_configured": true, 00:08:25.028 "data_offset": 2048, 00:08:25.028 "data_size": 63488 00:08:25.028 }, 00:08:25.028 { 00:08:25.028 "name": null, 00:08:25.028 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:25.028 "is_configured": false, 00:08:25.028 "data_offset": 0, 00:08:25.028 "data_size": 63488 00:08:25.028 }, 00:08:25.028 { 00:08:25.028 "name": null, 00:08:25.028 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:25.028 "is_configured": false, 00:08:25.028 
"data_offset": 2048, 00:08:25.028 "data_size": 63488 00:08:25.028 } 00:08:25.028 ] 00:08:25.028 }' 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.028 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.381 [2024-10-30 09:43:03.841271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:25.381 [2024-10-30 09:43:03.841329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.381 [2024-10-30 09:43:03.841344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:25.381 [2024-10-30 09:43:03.841354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.381 [2024-10-30 09:43:03.841754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.381 [2024-10-30 09:43:03.841769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:25.381 [2024-10-30 09:43:03.841836] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:25.381 [2024-10-30 09:43:03.841866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:25.381 pt2 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.381 09:43:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.381 [2024-10-30 09:43:03.849253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:25.381 [2024-10-30 09:43:03.849295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.381 [2024-10-30 09:43:03.849311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:25.381 [2024-10-30 09:43:03.849320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.381 [2024-10-30 09:43:03.849666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.381 [2024-10-30 09:43:03.849688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:25.381 [2024-10-30 09:43:03.849742] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:25.381 [2024-10-30 09:43:03.849760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:25.381 [2024-10-30 09:43:03.849873] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:25.381 [2024-10-30 09:43:03.849886] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:25.381 [2024-10-30 09:43:03.850118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:25.381 [2024-10-30 09:43:03.850256] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:08:25.381 [2024-10-30 09:43:03.850264] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:25.381 [2024-10-30 09:43:03.850388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.381 pt3 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:25.381 09:43:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.381 "name": "raid_bdev1", 00:08:25.381 "uuid": "cc5c9329-14b6-4a1a-b78a-06538ffc4f4b", 00:08:25.381 "strip_size_kb": 0, 00:08:25.381 "state": "online", 00:08:25.381 "raid_level": "raid1", 00:08:25.381 "superblock": true, 00:08:25.381 "num_base_bdevs": 3, 00:08:25.381 "num_base_bdevs_discovered": 3, 00:08:25.381 "num_base_bdevs_operational": 3, 00:08:25.381 "base_bdevs_list": [ 00:08:25.381 { 00:08:25.381 "name": "pt1", 00:08:25.381 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:25.381 "is_configured": true, 00:08:25.381 "data_offset": 2048, 00:08:25.381 "data_size": 63488 00:08:25.381 }, 00:08:25.381 { 00:08:25.381 "name": "pt2", 00:08:25.381 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:25.381 "is_configured": true, 00:08:25.381 "data_offset": 2048, 00:08:25.381 "data_size": 63488 00:08:25.381 }, 00:08:25.381 { 00:08:25.381 "name": "pt3", 00:08:25.381 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:25.381 "is_configured": true, 00:08:25.381 "data_offset": 2048, 00:08:25.381 "data_size": 63488 00:08:25.381 } 00:08:25.381 ] 00:08:25.381 }' 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.381 09:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.642 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:25.642 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:25.642 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:08:25.642 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:25.642 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:25.642 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:25.642 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:25.642 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:25.642 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.642 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.642 [2024-10-30 09:43:04.177662] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:25.642 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.642 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:25.642 "name": "raid_bdev1", 00:08:25.642 "aliases": [ 00:08:25.642 "cc5c9329-14b6-4a1a-b78a-06538ffc4f4b" 00:08:25.642 ], 00:08:25.642 "product_name": "Raid Volume", 00:08:25.642 "block_size": 512, 00:08:25.642 "num_blocks": 63488, 00:08:25.642 "uuid": "cc5c9329-14b6-4a1a-b78a-06538ffc4f4b", 00:08:25.642 "assigned_rate_limits": { 00:08:25.642 "rw_ios_per_sec": 0, 00:08:25.642 "rw_mbytes_per_sec": 0, 00:08:25.642 "r_mbytes_per_sec": 0, 00:08:25.642 "w_mbytes_per_sec": 0 00:08:25.642 }, 00:08:25.642 "claimed": false, 00:08:25.642 "zoned": false, 00:08:25.642 "supported_io_types": { 00:08:25.642 "read": true, 00:08:25.642 "write": true, 00:08:25.642 "unmap": false, 00:08:25.642 "flush": false, 00:08:25.642 "reset": true, 00:08:25.642 "nvme_admin": false, 00:08:25.642 "nvme_io": false, 00:08:25.642 "nvme_io_md": false, 00:08:25.642 "write_zeroes": true, 00:08:25.642 "zcopy": false, 00:08:25.642 "get_zone_info": 
false, 00:08:25.642 "zone_management": false, 00:08:25.642 "zone_append": false, 00:08:25.642 "compare": false, 00:08:25.642 "compare_and_write": false, 00:08:25.642 "abort": false, 00:08:25.642 "seek_hole": false, 00:08:25.642 "seek_data": false, 00:08:25.642 "copy": false, 00:08:25.642 "nvme_iov_md": false 00:08:25.642 }, 00:08:25.642 "memory_domains": [ 00:08:25.642 { 00:08:25.642 "dma_device_id": "system", 00:08:25.642 "dma_device_type": 1 00:08:25.642 }, 00:08:25.642 { 00:08:25.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.642 "dma_device_type": 2 00:08:25.642 }, 00:08:25.642 { 00:08:25.642 "dma_device_id": "system", 00:08:25.642 "dma_device_type": 1 00:08:25.642 }, 00:08:25.642 { 00:08:25.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.642 "dma_device_type": 2 00:08:25.642 }, 00:08:25.642 { 00:08:25.642 "dma_device_id": "system", 00:08:25.642 "dma_device_type": 1 00:08:25.642 }, 00:08:25.642 { 00:08:25.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.642 "dma_device_type": 2 00:08:25.642 } 00:08:25.642 ], 00:08:25.642 "driver_specific": { 00:08:25.642 "raid": { 00:08:25.642 "uuid": "cc5c9329-14b6-4a1a-b78a-06538ffc4f4b", 00:08:25.642 "strip_size_kb": 0, 00:08:25.642 "state": "online", 00:08:25.642 "raid_level": "raid1", 00:08:25.642 "superblock": true, 00:08:25.642 "num_base_bdevs": 3, 00:08:25.642 "num_base_bdevs_discovered": 3, 00:08:25.642 "num_base_bdevs_operational": 3, 00:08:25.642 "base_bdevs_list": [ 00:08:25.642 { 00:08:25.642 "name": "pt1", 00:08:25.642 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:25.642 "is_configured": true, 00:08:25.642 "data_offset": 2048, 00:08:25.642 "data_size": 63488 00:08:25.642 }, 00:08:25.642 { 00:08:25.642 "name": "pt2", 00:08:25.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:25.642 "is_configured": true, 00:08:25.642 "data_offset": 2048, 00:08:25.642 "data_size": 63488 00:08:25.642 }, 00:08:25.642 { 00:08:25.642 "name": "pt3", 00:08:25.642 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:08:25.642 "is_configured": true, 00:08:25.642 "data_offset": 2048, 00:08:25.642 "data_size": 63488 00:08:25.642 } 00:08:25.642 ] 00:08:25.642 } 00:08:25.642 } 00:08:25.642 }' 00:08:25.642 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:25.642 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:25.642 pt2 00:08:25.642 pt3' 00:08:25.642 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.902 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:25.902 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.902 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:25.902 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.902 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.902 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.902 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.902 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.902 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.902 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.902 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:25.902 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:08:25.902 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.902 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.902 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.902 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.902 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.902 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.903 [2024-10-30 09:43:04.385681] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cc5c9329-14b6-4a1a-b78a-06538ffc4f4b '!=' cc5c9329-14b6-4a1a-b78a-06538ffc4f4b ']' 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.903 [2024-10-30 09:43:04.409432] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.903 09:43:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.903 "name": "raid_bdev1", 00:08:25.903 "uuid": "cc5c9329-14b6-4a1a-b78a-06538ffc4f4b", 00:08:25.903 "strip_size_kb": 0, 00:08:25.903 "state": "online", 00:08:25.903 "raid_level": "raid1", 00:08:25.903 "superblock": true, 00:08:25.903 "num_base_bdevs": 3, 00:08:25.903 "num_base_bdevs_discovered": 2, 00:08:25.903 "num_base_bdevs_operational": 2, 00:08:25.903 "base_bdevs_list": [ 00:08:25.903 { 00:08:25.903 "name": null, 00:08:25.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.903 "is_configured": false, 00:08:25.903 "data_offset": 0, 00:08:25.903 "data_size": 63488 00:08:25.903 }, 00:08:25.903 { 00:08:25.903 "name": "pt2", 00:08:25.903 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:25.903 "is_configured": true, 00:08:25.903 "data_offset": 2048, 00:08:25.903 "data_size": 63488 00:08:25.903 }, 00:08:25.903 { 00:08:25.903 "name": "pt3", 00:08:25.903 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:25.903 "is_configured": true, 00:08:25.903 "data_offset": 2048, 00:08:25.903 "data_size": 63488 00:08:25.903 } 
00:08:25.903 ] 00:08:25.903 }' 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.903 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.163 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:26.163 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.163 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.163 [2024-10-30 09:43:04.745468] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:26.163 [2024-10-30 09:43:04.745492] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:26.163 [2024-10-30 09:43:04.745559] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:26.163 [2024-10-30 09:43:04.745618] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:26.163 [2024-10-30 09:43:04.745631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:26.163 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.163 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.163 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.163 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.163 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:26.163 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.163 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:26.163 09:43:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:26.163 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:26.163 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:26.163 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:26.163 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.163 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.424 09:43:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.424 [2024-10-30 09:43:04.805478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:26.424 [2024-10-30 09:43:04.805530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.424 [2024-10-30 09:43:04.805548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:08:26.424 [2024-10-30 09:43:04.805559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.424 [2024-10-30 09:43:04.807744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.424 [2024-10-30 09:43:04.807870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:26.424 [2024-10-30 09:43:04.807955] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:26.424 [2024-10-30 09:43:04.808000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:26.424 pt2 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.424 09:43:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.424 "name": "raid_bdev1", 00:08:26.424 "uuid": "cc5c9329-14b6-4a1a-b78a-06538ffc4f4b", 00:08:26.424 "strip_size_kb": 0, 00:08:26.424 "state": "configuring", 00:08:26.424 "raid_level": "raid1", 00:08:26.424 "superblock": true, 00:08:26.424 "num_base_bdevs": 3, 00:08:26.424 "num_base_bdevs_discovered": 1, 00:08:26.424 "num_base_bdevs_operational": 2, 00:08:26.424 "base_bdevs_list": [ 00:08:26.424 { 00:08:26.424 "name": null, 00:08:26.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.424 "is_configured": false, 00:08:26.424 "data_offset": 2048, 00:08:26.424 "data_size": 63488 00:08:26.424 }, 00:08:26.424 { 00:08:26.424 "name": "pt2", 00:08:26.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:26.424 "is_configured": true, 00:08:26.424 "data_offset": 2048, 00:08:26.424 "data_size": 63488 00:08:26.424 }, 00:08:26.424 { 00:08:26.424 "name": null, 00:08:26.424 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:26.424 "is_configured": false, 00:08:26.424 "data_offset": 2048, 00:08:26.424 "data_size": 63488 00:08:26.424 } 
00:08:26.424 ] 00:08:26.424 }' 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.424 09:43:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.686 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:08:26.686 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:26.686 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:08:26.686 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:26.686 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.686 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.686 [2024-10-30 09:43:05.121563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:26.686 [2024-10-30 09:43:05.121623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.686 [2024-10-30 09:43:05.121641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:08:26.686 [2024-10-30 09:43:05.121652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.686 [2024-10-30 09:43:05.122071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.686 [2024-10-30 09:43:05.122094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:26.686 [2024-10-30 09:43:05.122173] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:26.686 [2024-10-30 09:43:05.122198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:26.686 [2024-10-30 09:43:05.122303] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:08:26.686 [2024-10-30 09:43:05.122319] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:26.686 [2024-10-30 09:43:05.122566] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:26.686 [2024-10-30 09:43:05.122695] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:26.686 [2024-10-30 09:43:05.122797] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:26.686 [2024-10-30 09:43:05.122932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.686 pt3 00:08:26.686 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.686 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:26.686 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:26.686 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.686 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:26.686 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:26.686 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:26.686 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.686 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.686 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.686 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.686 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.686 
09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.686 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.686 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.686 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.686 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.686 "name": "raid_bdev1", 00:08:26.686 "uuid": "cc5c9329-14b6-4a1a-b78a-06538ffc4f4b", 00:08:26.686 "strip_size_kb": 0, 00:08:26.686 "state": "online", 00:08:26.686 "raid_level": "raid1", 00:08:26.686 "superblock": true, 00:08:26.686 "num_base_bdevs": 3, 00:08:26.686 "num_base_bdevs_discovered": 2, 00:08:26.686 "num_base_bdevs_operational": 2, 00:08:26.686 "base_bdevs_list": [ 00:08:26.686 { 00:08:26.686 "name": null, 00:08:26.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.686 "is_configured": false, 00:08:26.686 "data_offset": 2048, 00:08:26.686 "data_size": 63488 00:08:26.686 }, 00:08:26.686 { 00:08:26.686 "name": "pt2", 00:08:26.686 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:26.686 "is_configured": true, 00:08:26.686 "data_offset": 2048, 00:08:26.686 "data_size": 63488 00:08:26.686 }, 00:08:26.686 { 00:08:26.686 "name": "pt3", 00:08:26.686 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:26.686 "is_configured": true, 00:08:26.686 "data_offset": 2048, 00:08:26.686 "data_size": 63488 00:08:26.686 } 00:08:26.686 ] 00:08:26.686 }' 00:08:26.686 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.686 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.949 [2024-10-30 09:43:05.449616] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:26.949 [2024-10-30 09:43:05.449739] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:26.949 [2024-10-30 09:43:05.449813] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:26.949 [2024-10-30 09:43:05.449873] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:26.949 [2024-10-30 09:43:05.449882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.949 [2024-10-30 09:43:05.501649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:26.949 [2024-10-30 09:43:05.501699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.949 [2024-10-30 09:43:05.501717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:08:26.949 [2024-10-30 09:43:05.501726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.949 [2024-10-30 09:43:05.503900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.949 [2024-10-30 09:43:05.503934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:26.949 [2024-10-30 09:43:05.504006] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:26.949 [2024-10-30 09:43:05.504043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:26.949 [2024-10-30 09:43:05.504167] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:26.949 [2024-10-30 09:43:05.504178] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:26.949 [2024-10-30 09:43:05.504193] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:08:26.949 [2024-10-30 09:43:05.504239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:26.949 pt1 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.949 "name": "raid_bdev1", 00:08:26.949 "uuid": "cc5c9329-14b6-4a1a-b78a-06538ffc4f4b", 00:08:26.949 "strip_size_kb": 0, 00:08:26.949 "state": "configuring", 00:08:26.949 "raid_level": "raid1", 00:08:26.949 "superblock": true, 00:08:26.949 "num_base_bdevs": 3, 00:08:26.949 "num_base_bdevs_discovered": 1, 00:08:26.949 "num_base_bdevs_operational": 2, 00:08:26.949 "base_bdevs_list": [ 00:08:26.949 { 00:08:26.949 "name": null, 00:08:26.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.949 "is_configured": false, 00:08:26.949 "data_offset": 2048, 00:08:26.949 "data_size": 63488 00:08:26.949 }, 00:08:26.949 { 00:08:26.949 "name": "pt2", 00:08:26.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:26.949 "is_configured": true, 00:08:26.949 "data_offset": 2048, 00:08:26.949 "data_size": 63488 00:08:26.949 }, 00:08:26.949 { 00:08:26.949 "name": null, 00:08:26.949 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:26.949 "is_configured": false, 00:08:26.949 "data_offset": 2048, 00:08:26.949 "data_size": 63488 00:08:26.949 } 00:08:26.949 ] 00:08:26.949 }' 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.949 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.521 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:08:27.521 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.521 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.521 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:27.521 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:27.521 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:08:27.521 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:27.521 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.521 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.521 [2024-10-30 09:43:05.897751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:27.521 [2024-10-30 09:43:05.897804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.521 [2024-10-30 09:43:05.897822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:08:27.521 [2024-10-30 09:43:05.897831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.521 [2024-10-30 09:43:05.898243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.521 [2024-10-30 09:43:05.898258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:27.521 [2024-10-30 09:43:05.898326] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:27.521 [2024-10-30 09:43:05.898362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:27.521 [2024-10-30 09:43:05.898472] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:27.521 [2024-10-30 09:43:05.898481] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:27.521 [2024-10-30 09:43:05.898716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:27.521 [2024-10-30 09:43:05.898852] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:27.521 [2024-10-30 09:43:05.898862] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:27.521 [2024-10-30 09:43:05.898986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.521 pt3 00:08:27.521 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.521 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:27.521 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:27.522 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.522 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:27.522 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:27.522 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.522 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.522 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.522 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.522 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.522 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.522 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.522 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.522 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.522 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:08:27.522 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.522 "name": "raid_bdev1", 00:08:27.522 "uuid": "cc5c9329-14b6-4a1a-b78a-06538ffc4f4b", 00:08:27.522 "strip_size_kb": 0, 00:08:27.522 "state": "online", 00:08:27.522 "raid_level": "raid1", 00:08:27.522 "superblock": true, 00:08:27.522 "num_base_bdevs": 3, 00:08:27.522 "num_base_bdevs_discovered": 2, 00:08:27.522 "num_base_bdevs_operational": 2, 00:08:27.522 "base_bdevs_list": [ 00:08:27.522 { 00:08:27.522 "name": null, 00:08:27.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.522 "is_configured": false, 00:08:27.522 "data_offset": 2048, 00:08:27.522 "data_size": 63488 00:08:27.522 }, 00:08:27.522 { 00:08:27.522 "name": "pt2", 00:08:27.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:27.522 "is_configured": true, 00:08:27.522 "data_offset": 2048, 00:08:27.522 "data_size": 63488 00:08:27.522 }, 00:08:27.522 { 00:08:27.522 "name": "pt3", 00:08:27.522 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:27.522 "is_configured": true, 00:08:27.522 "data_offset": 2048, 00:08:27.522 "data_size": 63488 00:08:27.522 } 00:08:27.522 ] 00:08:27.522 }' 00:08:27.522 09:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.522 09:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.784 09:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:27.784 09:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:27.784 09:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.784 09:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.784 09:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.784 09:43:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:27.784 09:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:27.784 09:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.784 09:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.784 09:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:27.784 [2024-10-30 09:43:06.246129] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.784 09:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.784 09:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' cc5c9329-14b6-4a1a-b78a-06538ffc4f4b '!=' cc5c9329-14b6-4a1a-b78a-06538ffc4f4b ']' 00:08:27.784 09:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67072 00:08:27.784 09:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 67072 ']' 00:08:27.784 09:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 67072 00:08:27.784 09:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:08:27.784 09:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:27.784 09:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67072 00:08:27.784 killing process with pid 67072 00:08:27.784 09:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:27.784 09:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:27.784 09:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67072' 00:08:27.784 09:43:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@971 -- # kill 67072 00:08:27.784 [2024-10-30 09:43:06.309772] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:27.784 [2024-10-30 09:43:06.309863] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.784 09:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 67072 00:08:27.784 [2024-10-30 09:43:06.309924] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:27.784 [2024-10-30 09:43:06.309936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:28.045 [2024-10-30 09:43:06.494236] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:28.614 09:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:28.614 00:08:28.614 real 0m5.762s 00:08:28.614 user 0m9.052s 00:08:28.614 sys 0m0.895s 00:08:28.614 09:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:28.614 ************************************ 00:08:28.614 END TEST raid_superblock_test 00:08:28.614 ************************************ 00:08:28.614 09:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.614 09:43:07 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:08:28.614 09:43:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:28.614 09:43:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:28.614 09:43:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:28.876 ************************************ 00:08:28.876 START TEST raid_read_error_test 00:08:28.876 ************************************ 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 read 00:08:28.876 09:43:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:28.876 09:43:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Tq7eYUpgCy 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67501 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67501 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 67501 ']' 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.876 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:28.876 [2024-10-30 09:43:07.319496] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:08:28.876 [2024-10-30 09:43:07.319774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67501 ] 00:08:28.876 [2024-10-30 09:43:07.480125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.138 [2024-10-30 09:43:07.581879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.138 [2024-10-30 09:43:07.717226] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.138 [2024-10-30 09:43:07.717262] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.711 BaseBdev1_malloc 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.711 true 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.711 [2024-10-30 09:43:08.202279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:29.711 [2024-10-30 09:43:08.202336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.711 [2024-10-30 09:43:08.202356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:29.711 [2024-10-30 09:43:08.202368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.711 [2024-10-30 09:43:08.204510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.711 [2024-10-30 09:43:08.204548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:29.711 BaseBdev1 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.711 BaseBdev2_malloc 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.711 true 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.711 [2024-10-30 09:43:08.246487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:29.711 [2024-10-30 09:43:08.246537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.711 [2024-10-30 09:43:08.246552] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:29.711 [2024-10-30 09:43:08.246562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.711 [2024-10-30 09:43:08.248684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.711 [2024-10-30 09:43:08.248722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:29.711 BaseBdev2 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.711 BaseBdev3_malloc 00:08:29.711 09:43:08 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.711 true 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.711 [2024-10-30 09:43:08.300204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:29.711 [2024-10-30 09:43:08.300256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.711 [2024-10-30 09:43:08.300274] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:29.711 [2024-10-30 09:43:08.300285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.711 [2024-10-30 09:43:08.302419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.711 [2024-10-30 09:43:08.302455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:29.711 BaseBdev3 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.711 [2024-10-30 09:43:08.308264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:29.711 [2024-10-30 09:43:08.310095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:29.711 [2024-10-30 09:43:08.310169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:29.711 [2024-10-30 09:43:08.310364] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:29.711 [2024-10-30 09:43:08.310374] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:29.711 [2024-10-30 09:43:08.310616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:29.711 [2024-10-30 09:43:08.310763] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:29.711 [2024-10-30 09:43:08.310773] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:29.711 [2024-10-30 09:43:08.310904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:29.711 09:43:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:29.711 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.971 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.971 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.971 "name": "raid_bdev1", 00:08:29.971 "uuid": "6f181967-80a6-4caa-b16e-0f916bee10d3", 00:08:29.971 "strip_size_kb": 0, 00:08:29.971 "state": "online", 00:08:29.971 "raid_level": "raid1", 00:08:29.971 "superblock": true, 00:08:29.971 "num_base_bdevs": 3, 00:08:29.971 "num_base_bdevs_discovered": 3, 00:08:29.971 "num_base_bdevs_operational": 3, 00:08:29.971 "base_bdevs_list": [ 00:08:29.971 { 00:08:29.971 "name": "BaseBdev1", 00:08:29.971 "uuid": "b2bc0299-aede-5900-ad92-0fd8131bf6bb", 00:08:29.971 "is_configured": true, 00:08:29.971 "data_offset": 2048, 00:08:29.971 "data_size": 63488 00:08:29.971 }, 00:08:29.971 { 00:08:29.971 "name": "BaseBdev2", 00:08:29.971 "uuid": "45e5e1c6-0faa-5b38-9da8-3548578a3035", 00:08:29.971 "is_configured": true, 00:08:29.971 "data_offset": 2048, 00:08:29.971 "data_size": 63488 
00:08:29.971 }, 00:08:29.971 { 00:08:29.971 "name": "BaseBdev3", 00:08:29.971 "uuid": "d5ab9c20-58a6-5c90-b0ae-15c02f847b4a", 00:08:29.971 "is_configured": true, 00:08:29.971 "data_offset": 2048, 00:08:29.971 "data_size": 63488 00:08:29.971 } 00:08:29.971 ] 00:08:29.971 }' 00:08:29.971 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.971 09:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.230 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:30.230 09:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:30.230 [2024-10-30 09:43:08.757367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:31.167 09:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:31.167 09:43:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.167 09:43:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.167 09:43:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.167 09:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:31.167 09:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:31.167 09:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:31.167 09:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:31.168 09:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:31.168 09:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:31.168 
09:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.168 09:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:31.168 09:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:31.168 09:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.168 09:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.168 09:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.168 09:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.168 09:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.168 09:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.168 09:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.168 09:43:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.168 09:43:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.168 09:43:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.168 09:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.168 "name": "raid_bdev1", 00:08:31.168 "uuid": "6f181967-80a6-4caa-b16e-0f916bee10d3", 00:08:31.168 "strip_size_kb": 0, 00:08:31.168 "state": "online", 00:08:31.168 "raid_level": "raid1", 00:08:31.168 "superblock": true, 00:08:31.168 "num_base_bdevs": 3, 00:08:31.168 "num_base_bdevs_discovered": 3, 00:08:31.168 "num_base_bdevs_operational": 3, 00:08:31.168 "base_bdevs_list": [ 00:08:31.168 { 00:08:31.168 "name": "BaseBdev1", 00:08:31.168 "uuid": "b2bc0299-aede-5900-ad92-0fd8131bf6bb", 
00:08:31.168 "is_configured": true, 00:08:31.168 "data_offset": 2048, 00:08:31.168 "data_size": 63488 00:08:31.168 }, 00:08:31.168 { 00:08:31.168 "name": "BaseBdev2", 00:08:31.168 "uuid": "45e5e1c6-0faa-5b38-9da8-3548578a3035", 00:08:31.168 "is_configured": true, 00:08:31.168 "data_offset": 2048, 00:08:31.168 "data_size": 63488 00:08:31.168 }, 00:08:31.168 { 00:08:31.168 "name": "BaseBdev3", 00:08:31.168 "uuid": "d5ab9c20-58a6-5c90-b0ae-15c02f847b4a", 00:08:31.168 "is_configured": true, 00:08:31.168 "data_offset": 2048, 00:08:31.168 "data_size": 63488 00:08:31.168 } 00:08:31.168 ] 00:08:31.168 }' 00:08:31.168 09:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.168 09:43:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.426 09:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:31.426 09:43:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.426 09:43:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.426 [2024-10-30 09:43:09.997130] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:31.426 [2024-10-30 09:43:09.997166] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:31.426 [2024-10-30 09:43:10.000706] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:31.426 [2024-10-30 09:43:10.000763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.426 [2024-10-30 09:43:10.000873] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:31.426 [2024-10-30 09:43:10.000883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:31.426 { 00:08:31.426 "results": [ 00:08:31.426 { 00:08:31.426 "job": "raid_bdev1", 
00:08:31.426 "core_mask": "0x1", 00:08:31.426 "workload": "randrw", 00:08:31.426 "percentage": 50, 00:08:31.426 "status": "finished", 00:08:31.426 "queue_depth": 1, 00:08:31.426 "io_size": 131072, 00:08:31.426 "runtime": 1.237962, 00:08:31.426 "iops": 14099.786584725542, 00:08:31.426 "mibps": 1762.4733230906927, 00:08:31.426 "io_failed": 0, 00:08:31.426 "io_timeout": 0, 00:08:31.426 "avg_latency_us": 67.74149333450852, 00:08:31.426 "min_latency_us": 29.341538461538462, 00:08:31.426 "max_latency_us": 1789.636923076923 00:08:31.426 } 00:08:31.426 ], 00:08:31.426 "core_count": 1 00:08:31.426 } 00:08:31.426 09:43:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.426 09:43:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67501 00:08:31.426 09:43:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 67501 ']' 00:08:31.426 09:43:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 67501 00:08:31.426 09:43:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:08:31.426 09:43:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:31.426 09:43:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67501 00:08:31.426 killing process with pid 67501 00:08:31.426 09:43:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:31.426 09:43:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:31.426 09:43:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67501' 00:08:31.426 09:43:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 67501 00:08:31.426 [2024-10-30 09:43:10.028272] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:31.426 09:43:10 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 67501 00:08:31.685 [2024-10-30 09:43:10.171271] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:32.630 09:43:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Tq7eYUpgCy 00:08:32.630 09:43:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:32.630 09:43:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:32.630 ************************************ 00:08:32.630 END TEST raid_read_error_test 00:08:32.630 ************************************ 00:08:32.630 09:43:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:32.630 09:43:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:32.630 09:43:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:32.630 09:43:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:32.630 09:43:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:32.630 00:08:32.630 real 0m3.670s 00:08:32.630 user 0m4.433s 00:08:32.630 sys 0m0.382s 00:08:32.630 09:43:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:32.630 09:43:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.630 09:43:10 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:08:32.630 09:43:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:32.630 09:43:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:32.630 09:43:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:32.630 ************************************ 00:08:32.630 START TEST raid_write_error_test 00:08:32.630 ************************************ 00:08:32.630 09:43:10 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 write 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tDWs2tBykb 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67630 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67630 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 67630 ']' 00:08:32.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.630 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:32.630 [2024-10-30 09:43:11.056928] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:08:32.630 [2024-10-30 09:43:11.057078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67630 ] 00:08:32.630 [2024-10-30 09:43:11.215675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.893 [2024-10-30 09:43:11.317614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.893 [2024-10-30 09:43:11.452907] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.893 [2024-10-30 09:43:11.453133] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.464 09:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:33.464 09:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:33.464 09:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:33.464 09:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:33.464 09:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.464 09:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.464 BaseBdev1_malloc 00:08:33.464 09:43:11 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.464 09:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:33.464 09:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.464 09:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.464 true 00:08:33.464 09:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.464 09:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:33.464 09:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.464 09:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.464 [2024-10-30 09:43:11.998134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:33.464 [2024-10-30 09:43:11.998189] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.464 [2024-10-30 09:43:11.998214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:33.464 [2024-10-30 09:43:11.998228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.464 [2024-10-30 09:43:12.000500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.464 [2024-10-30 09:43:12.000636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:33.464 BaseBdev1 00:08:33.464 09:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.464 09:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:33.464 09:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:08:33.464 09:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.464 09:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.464 BaseBdev2_malloc 00:08:33.464 09:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.464 09:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:33.464 09:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.464 09:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.464 true 00:08:33.464 09:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.464 09:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:33.464 09:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.464 09:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.464 [2024-10-30 09:43:12.042175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:33.464 [2024-10-30 09:43:12.042232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.464 [2024-10-30 09:43:12.042253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:33.464 [2024-10-30 09:43:12.042267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.464 [2024-10-30 09:43:12.044504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.464 [2024-10-30 09:43:12.044546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:33.464 BaseBdev2 00:08:33.464 09:43:12 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.464 09:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:33.464 09:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:33.465 09:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.465 09:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.465 BaseBdev3_malloc 00:08:33.465 09:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.465 09:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:33.465 09:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.465 09:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.728 true 00:08:33.728 09:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.728 09:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:33.728 09:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.728 09:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.728 [2024-10-30 09:43:12.094146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:33.728 [2024-10-30 09:43:12.094199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.728 [2024-10-30 09:43:12.094222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:33.728 [2024-10-30 09:43:12.094236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.728 [2024-10-30 09:43:12.096542] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.728 [2024-10-30 09:43:12.096581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:33.728 BaseBdev3 00:08:33.728 09:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.728 09:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:33.728 09:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.728 09:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.728 [2024-10-30 09:43:12.102226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.728 [2024-10-30 09:43:12.104205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:33.728 [2024-10-30 09:43:12.104352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:33.728 [2024-10-30 09:43:12.104586] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:33.728 [2024-10-30 09:43:12.104623] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:33.728 [2024-10-30 09:43:12.104943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:33.728 [2024-10-30 09:43:12.105177] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:33.728 [2024-10-30 09:43:12.105254] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:33.728 [2024-10-30 09:43:12.105984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.728 09:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.728 09:43:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:33.728 09:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:33.728 09:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:33.728 09:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:33.728 09:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:33.728 09:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.728 09:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.728 09:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.728 09:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.728 09:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.728 09:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.728 09:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.728 09:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.728 09:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.728 09:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.728 09:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.728 "name": "raid_bdev1", 00:08:33.728 "uuid": "0a138073-f819-4c8b-a4a8-f154b9d227fe", 00:08:33.728 "strip_size_kb": 0, 00:08:33.728 "state": "online", 00:08:33.728 "raid_level": "raid1", 00:08:33.728 "superblock": true, 00:08:33.728 
"num_base_bdevs": 3, 00:08:33.728 "num_base_bdevs_discovered": 3, 00:08:33.728 "num_base_bdevs_operational": 3, 00:08:33.728 "base_bdevs_list": [ 00:08:33.728 { 00:08:33.728 "name": "BaseBdev1", 00:08:33.728 "uuid": "af9d93af-9d7b-5719-a377-9c99c0eb6924", 00:08:33.728 "is_configured": true, 00:08:33.728 "data_offset": 2048, 00:08:33.728 "data_size": 63488 00:08:33.728 }, 00:08:33.728 { 00:08:33.728 "name": "BaseBdev2", 00:08:33.728 "uuid": "88aeb63f-dd17-5416-95ee-bd900cc1cbd5", 00:08:33.728 "is_configured": true, 00:08:33.728 "data_offset": 2048, 00:08:33.728 "data_size": 63488 00:08:33.728 }, 00:08:33.729 { 00:08:33.729 "name": "BaseBdev3", 00:08:33.729 "uuid": "2cb99ef2-309b-54b5-b6a7-9a8557ce75a3", 00:08:33.729 "is_configured": true, 00:08:33.729 "data_offset": 2048, 00:08:33.729 "data_size": 63488 00:08:33.729 } 00:08:33.729 ] 00:08:33.729 }' 00:08:33.729 09:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.729 09:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.991 09:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:33.991 09:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:33.991 [2024-10-30 09:43:12.491273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:34.936 09:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:34.936 09:43:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.936 09:43:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.936 [2024-10-30 09:43:13.412345] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:34.936 [2024-10-30 09:43:13.412393] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:34.936 [2024-10-30 09:43:13.412603] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 00:08:34.936 09:43:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.936 09:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:34.936 09:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:34.936 09:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:34.936 09:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:08:34.936 09:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:34.936 09:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:34.936 09:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.936 09:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.936 09:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.936 09:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:34.936 09:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.936 09:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.936 09:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.936 09:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.936 09:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.936 09:43:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.936 09:43:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.936 09:43:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.936 09:43:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.936 09:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.936 "name": "raid_bdev1", 00:08:34.936 "uuid": "0a138073-f819-4c8b-a4a8-f154b9d227fe", 00:08:34.936 "strip_size_kb": 0, 00:08:34.936 "state": "online", 00:08:34.936 "raid_level": "raid1", 00:08:34.936 "superblock": true, 00:08:34.936 "num_base_bdevs": 3, 00:08:34.936 "num_base_bdevs_discovered": 2, 00:08:34.936 "num_base_bdevs_operational": 2, 00:08:34.936 "base_bdevs_list": [ 00:08:34.936 { 00:08:34.936 "name": null, 00:08:34.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.936 "is_configured": false, 00:08:34.936 "data_offset": 0, 00:08:34.936 "data_size": 63488 00:08:34.936 }, 00:08:34.936 { 00:08:34.936 "name": "BaseBdev2", 00:08:34.936 "uuid": "88aeb63f-dd17-5416-95ee-bd900cc1cbd5", 00:08:34.936 "is_configured": true, 00:08:34.936 "data_offset": 2048, 00:08:34.936 "data_size": 63488 00:08:34.936 }, 00:08:34.936 { 00:08:34.936 "name": "BaseBdev3", 00:08:34.936 "uuid": "2cb99ef2-309b-54b5-b6a7-9a8557ce75a3", 00:08:34.936 "is_configured": true, 00:08:34.936 "data_offset": 2048, 00:08:34.936 "data_size": 63488 00:08:34.936 } 00:08:34.936 ] 00:08:34.936 }' 00:08:34.936 09:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.936 09:43:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.198 09:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:35.198 09:43:13 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.198 09:43:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.198 [2024-10-30 09:43:13.730547] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:35.198 [2024-10-30 09:43:13.730577] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:35.198 [2024-10-30 09:43:13.733590] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.198 [2024-10-30 09:43:13.733641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.198 [2024-10-30 09:43:13.733723] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:35.198 [2024-10-30 09:43:13.733738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:35.198 { 00:08:35.198 "results": [ 00:08:35.198 { 00:08:35.198 "job": "raid_bdev1", 00:08:35.198 "core_mask": "0x1", 00:08:35.198 "workload": "randrw", 00:08:35.198 "percentage": 50, 00:08:35.198 "status": "finished", 00:08:35.198 "queue_depth": 1, 00:08:35.198 "io_size": 131072, 00:08:35.198 "runtime": 1.237356, 00:08:35.198 "iops": 15041.750312763666, 00:08:35.198 "mibps": 1880.2187890954583, 00:08:35.198 "io_failed": 0, 00:08:35.198 "io_timeout": 0, 00:08:35.198 "avg_latency_us": 63.317741738167264, 00:08:35.198 "min_latency_us": 29.341538461538462, 00:08:35.198 "max_latency_us": 1663.6061538461538 00:08:35.198 } 00:08:35.198 ], 00:08:35.198 "core_count": 1 00:08:35.198 } 00:08:35.198 09:43:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.198 09:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67630 00:08:35.198 09:43:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 67630 ']' 00:08:35.198 09:43:13 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@956 -- # kill -0 67630 00:08:35.198 09:43:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:08:35.198 09:43:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:35.198 09:43:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67630 00:08:35.198 killing process with pid 67630 00:08:35.198 09:43:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:35.198 09:43:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:35.198 09:43:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67630' 00:08:35.198 09:43:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 67630 00:08:35.198 [2024-10-30 09:43:13.765748] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:35.198 09:43:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 67630 00:08:35.459 [2024-10-30 09:43:13.909092] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:36.403 09:43:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tDWs2tBykb 00:08:36.403 09:43:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:36.403 09:43:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:36.403 09:43:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:36.403 09:43:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:36.403 ************************************ 00:08:36.403 END TEST raid_write_error_test 00:08:36.403 ************************************ 00:08:36.403 09:43:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:36.403 09:43:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:36.403 09:43:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:36.403 00:08:36.403 real 0m3.682s 00:08:36.403 user 0m4.380s 00:08:36.403 sys 0m0.407s 00:08:36.403 09:43:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:36.403 09:43:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.403 09:43:14 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:36.403 09:43:14 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:36.403 09:43:14 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:08:36.403 09:43:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:36.403 09:43:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:36.403 09:43:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:36.403 ************************************ 00:08:36.403 START TEST raid_state_function_test 00:08:36.403 ************************************ 00:08:36.403 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 false 00:08:36.403 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:36.403 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:08:36.403 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:36.403 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:36.403 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:36.403 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:36.403 09:43:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:36.403 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:36.403 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:36.403 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:36.403 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:36.403 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:36.403 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:36.403 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:36.403 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:36.403 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:08:36.403 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:36.403 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:36.403 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:08:36.403 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:36.403 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:36.404 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:36.404 Process raid pid: 67768 00:08:36.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:36.404 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:36.404 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:36.404 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:36.404 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:36.404 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:36.404 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:36.404 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:36.404 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67768 00:08:36.404 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67768' 00:08:36.404 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67768 00:08:36.404 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 67768 ']' 00:08:36.404 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.404 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:36.404 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:36.404 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:36.404 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:36.404 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.404 [2024-10-30 09:43:14.802195] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:08:36.404 [2024-10-30 09:43:14.802313] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.404 [2024-10-30 09:43:14.957264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.665 [2024-10-30 09:43:15.058781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.665 [2024-10-30 09:43:15.195864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.665 [2024-10-30 09:43:15.195890] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.238 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:37.238 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:08:37.238 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:37.238 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.238 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.238 [2024-10-30 09:43:15.665370] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:37.238 [2024-10-30 
09:43:15.665418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:37.238 [2024-10-30 09:43:15.665429] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:37.238 [2024-10-30 09:43:15.665440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:37.238 [2024-10-30 09:43:15.665447] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:37.238 [2024-10-30 09:43:15.665456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:37.238 [2024-10-30 09:43:15.665463] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:37.238 [2024-10-30 09:43:15.665472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:37.238 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.238 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:37.238 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.238 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.238 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.238 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.238 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:37.238 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.238 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.238 09:43:15 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.238 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.238 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.238 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.238 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.238 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.238 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.238 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.238 "name": "Existed_Raid", 00:08:37.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.238 "strip_size_kb": 64, 00:08:37.238 "state": "configuring", 00:08:37.238 "raid_level": "raid0", 00:08:37.238 "superblock": false, 00:08:37.238 "num_base_bdevs": 4, 00:08:37.238 "num_base_bdevs_discovered": 0, 00:08:37.238 "num_base_bdevs_operational": 4, 00:08:37.238 "base_bdevs_list": [ 00:08:37.238 { 00:08:37.238 "name": "BaseBdev1", 00:08:37.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.238 "is_configured": false, 00:08:37.238 "data_offset": 0, 00:08:37.238 "data_size": 0 00:08:37.238 }, 00:08:37.238 { 00:08:37.239 "name": "BaseBdev2", 00:08:37.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.239 "is_configured": false, 00:08:37.239 "data_offset": 0, 00:08:37.239 "data_size": 0 00:08:37.239 }, 00:08:37.239 { 00:08:37.239 "name": "BaseBdev3", 00:08:37.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.239 "is_configured": false, 00:08:37.239 "data_offset": 0, 00:08:37.239 "data_size": 0 00:08:37.239 }, 00:08:37.239 { 00:08:37.239 "name": "BaseBdev4", 00:08:37.239 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:37.239 "is_configured": false, 00:08:37.239 "data_offset": 0, 00:08:37.239 "data_size": 0 00:08:37.239 } 00:08:37.239 ] 00:08:37.239 }' 00:08:37.239 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.239 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.500 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:37.500 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.500 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.500 [2024-10-30 09:43:15.993395] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:37.500 [2024-10-30 09:43:15.993426] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:37.500 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.500 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:37.500 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.500 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.500 [2024-10-30 09:43:16.001402] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:37.500 [2024-10-30 09:43:16.001437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:37.500 [2024-10-30 09:43:16.001446] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:37.500 [2024-10-30 09:43:16.001455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: 
base bdev BaseBdev2 doesn't exist now 00:08:37.500 [2024-10-30 09:43:16.001461] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:37.500 [2024-10-30 09:43:16.001470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:37.500 [2024-10-30 09:43:16.001476] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:37.500 [2024-10-30 09:43:16.001484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.500 [2024-10-30 09:43:16.033641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:37.500 BaseBdev1 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.500 [ 00:08:37.500 { 00:08:37.500 "name": "BaseBdev1", 00:08:37.500 "aliases": [ 00:08:37.500 "69043f49-60f9-483e-987b-97f9ff4313d7" 00:08:37.500 ], 00:08:37.500 "product_name": "Malloc disk", 00:08:37.500 "block_size": 512, 00:08:37.500 "num_blocks": 65536, 00:08:37.500 "uuid": "69043f49-60f9-483e-987b-97f9ff4313d7", 00:08:37.500 "assigned_rate_limits": { 00:08:37.500 "rw_ios_per_sec": 0, 00:08:37.500 "rw_mbytes_per_sec": 0, 00:08:37.500 "r_mbytes_per_sec": 0, 00:08:37.500 "w_mbytes_per_sec": 0 00:08:37.500 }, 00:08:37.500 "claimed": true, 00:08:37.500 "claim_type": "exclusive_write", 00:08:37.500 "zoned": false, 00:08:37.500 "supported_io_types": { 00:08:37.500 "read": true, 00:08:37.500 "write": true, 00:08:37.500 "unmap": true, 00:08:37.500 "flush": true, 00:08:37.500 "reset": true, 00:08:37.500 "nvme_admin": false, 00:08:37.500 "nvme_io": false, 00:08:37.500 "nvme_io_md": false, 00:08:37.500 "write_zeroes": true, 00:08:37.500 "zcopy": true, 00:08:37.500 "get_zone_info": false, 00:08:37.500 "zone_management": false, 00:08:37.500 "zone_append": false, 00:08:37.500 "compare": false, 00:08:37.500 "compare_and_write": false, 00:08:37.500 "abort": true, 00:08:37.500 "seek_hole": false, 00:08:37.500 "seek_data": false, 00:08:37.500 
"copy": true, 00:08:37.500 "nvme_iov_md": false 00:08:37.500 }, 00:08:37.500 "memory_domains": [ 00:08:37.500 { 00:08:37.500 "dma_device_id": "system", 00:08:37.500 "dma_device_type": 1 00:08:37.500 }, 00:08:37.500 { 00:08:37.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.500 "dma_device_type": 2 00:08:37.500 } 00:08:37.500 ], 00:08:37.500 "driver_specific": {} 00:08:37.500 } 00:08:37.500 ] 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.500 "name": "Existed_Raid", 00:08:37.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.500 "strip_size_kb": 64, 00:08:37.500 "state": "configuring", 00:08:37.500 "raid_level": "raid0", 00:08:37.500 "superblock": false, 00:08:37.500 "num_base_bdevs": 4, 00:08:37.500 "num_base_bdevs_discovered": 1, 00:08:37.500 "num_base_bdevs_operational": 4, 00:08:37.500 "base_bdevs_list": [ 00:08:37.500 { 00:08:37.500 "name": "BaseBdev1", 00:08:37.500 "uuid": "69043f49-60f9-483e-987b-97f9ff4313d7", 00:08:37.500 "is_configured": true, 00:08:37.500 "data_offset": 0, 00:08:37.500 "data_size": 65536 00:08:37.500 }, 00:08:37.500 { 00:08:37.500 "name": "BaseBdev2", 00:08:37.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.500 "is_configured": false, 00:08:37.500 "data_offset": 0, 00:08:37.500 "data_size": 0 00:08:37.500 }, 00:08:37.500 { 00:08:37.500 "name": "BaseBdev3", 00:08:37.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.500 "is_configured": false, 00:08:37.500 "data_offset": 0, 00:08:37.500 "data_size": 0 00:08:37.500 }, 00:08:37.500 { 00:08:37.500 "name": "BaseBdev4", 00:08:37.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.500 "is_configured": false, 00:08:37.500 "data_offset": 0, 00:08:37.500 "data_size": 0 00:08:37.500 } 00:08:37.500 ] 00:08:37.500 }' 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.500 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.074 
09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.074 [2024-10-30 09:43:16.389757] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:38.074 [2024-10-30 09:43:16.389801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.074 [2024-10-30 09:43:16.397808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:38.074 [2024-10-30 09:43:16.399621] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:38.074 [2024-10-30 09:43:16.399661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:38.074 [2024-10-30 09:43:16.399670] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:38.074 [2024-10-30 09:43:16.399682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:38.074 [2024-10-30 09:43:16.399689] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:38.074 [2024-10-30 09:43:16.399698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev4 doesn't exist now 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.074 09:43:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.074 "name": "Existed_Raid", 00:08:38.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.074 "strip_size_kb": 64, 00:08:38.074 "state": "configuring", 00:08:38.074 "raid_level": "raid0", 00:08:38.074 "superblock": false, 00:08:38.074 "num_base_bdevs": 4, 00:08:38.074 "num_base_bdevs_discovered": 1, 00:08:38.074 "num_base_bdevs_operational": 4, 00:08:38.074 "base_bdevs_list": [ 00:08:38.074 { 00:08:38.074 "name": "BaseBdev1", 00:08:38.074 "uuid": "69043f49-60f9-483e-987b-97f9ff4313d7", 00:08:38.074 "is_configured": true, 00:08:38.074 "data_offset": 0, 00:08:38.074 "data_size": 65536 00:08:38.074 }, 00:08:38.074 { 00:08:38.074 "name": "BaseBdev2", 00:08:38.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.074 "is_configured": false, 00:08:38.074 "data_offset": 0, 00:08:38.074 "data_size": 0 00:08:38.074 }, 00:08:38.074 { 00:08:38.074 "name": "BaseBdev3", 00:08:38.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.074 "is_configured": false, 00:08:38.074 "data_offset": 0, 00:08:38.074 "data_size": 0 00:08:38.074 }, 00:08:38.074 { 00:08:38.074 "name": "BaseBdev4", 00:08:38.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.074 "is_configured": false, 00:08:38.074 "data_offset": 0, 00:08:38.074 "data_size": 0 00:08:38.074 } 00:08:38.074 ] 00:08:38.074 }' 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.074 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.337 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:38.337 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.337 09:43:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.337 [2024-10-30 09:43:16.732399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:38.337 BaseBdev2 00:08:38.337 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.337 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:38.337 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:38.337 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:38.337 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:38.337 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:38.337 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:38.337 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:38.337 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.337 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.337 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.337 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:38.337 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.337 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.337 [ 00:08:38.337 { 00:08:38.337 "name": "BaseBdev2", 00:08:38.337 "aliases": [ 00:08:38.337 "1f077e6b-4184-4a3c-b641-cf0bc2270839" 00:08:38.337 ], 00:08:38.337 "product_name": "Malloc disk", 
00:08:38.337 "block_size": 512, 00:08:38.338 "num_blocks": 65536, 00:08:38.338 "uuid": "1f077e6b-4184-4a3c-b641-cf0bc2270839", 00:08:38.338 "assigned_rate_limits": { 00:08:38.338 "rw_ios_per_sec": 0, 00:08:38.338 "rw_mbytes_per_sec": 0, 00:08:38.338 "r_mbytes_per_sec": 0, 00:08:38.338 "w_mbytes_per_sec": 0 00:08:38.338 }, 00:08:38.338 "claimed": true, 00:08:38.338 "claim_type": "exclusive_write", 00:08:38.338 "zoned": false, 00:08:38.338 "supported_io_types": { 00:08:38.338 "read": true, 00:08:38.338 "write": true, 00:08:38.338 "unmap": true, 00:08:38.338 "flush": true, 00:08:38.338 "reset": true, 00:08:38.338 "nvme_admin": false, 00:08:38.338 "nvme_io": false, 00:08:38.338 "nvme_io_md": false, 00:08:38.338 "write_zeroes": true, 00:08:38.338 "zcopy": true, 00:08:38.338 "get_zone_info": false, 00:08:38.338 "zone_management": false, 00:08:38.338 "zone_append": false, 00:08:38.338 "compare": false, 00:08:38.338 "compare_and_write": false, 00:08:38.338 "abort": true, 00:08:38.338 "seek_hole": false, 00:08:38.338 "seek_data": false, 00:08:38.338 "copy": true, 00:08:38.338 "nvme_iov_md": false 00:08:38.338 }, 00:08:38.338 "memory_domains": [ 00:08:38.338 { 00:08:38.338 "dma_device_id": "system", 00:08:38.338 "dma_device_type": 1 00:08:38.338 }, 00:08:38.338 { 00:08:38.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.338 "dma_device_type": 2 00:08:38.338 } 00:08:38.338 ], 00:08:38.338 "driver_specific": {} 00:08:38.338 } 00:08:38.338 ] 00:08:38.338 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.338 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:38.338 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:38.338 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:38.338 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state 
Existed_Raid configuring raid0 64 4 00:08:38.338 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.338 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.338 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.338 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.338 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:38.338 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.338 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.338 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.338 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.338 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.338 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.338 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.338 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.338 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.338 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.338 "name": "Existed_Raid", 00:08:38.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.338 "strip_size_kb": 64, 00:08:38.338 "state": "configuring", 00:08:38.338 "raid_level": "raid0", 00:08:38.338 "superblock": false, 00:08:38.338 "num_base_bdevs": 4, 
00:08:38.338 "num_base_bdevs_discovered": 2, 00:08:38.338 "num_base_bdevs_operational": 4, 00:08:38.338 "base_bdevs_list": [ 00:08:38.338 { 00:08:38.338 "name": "BaseBdev1", 00:08:38.338 "uuid": "69043f49-60f9-483e-987b-97f9ff4313d7", 00:08:38.338 "is_configured": true, 00:08:38.338 "data_offset": 0, 00:08:38.338 "data_size": 65536 00:08:38.338 }, 00:08:38.338 { 00:08:38.338 "name": "BaseBdev2", 00:08:38.338 "uuid": "1f077e6b-4184-4a3c-b641-cf0bc2270839", 00:08:38.338 "is_configured": true, 00:08:38.338 "data_offset": 0, 00:08:38.338 "data_size": 65536 00:08:38.338 }, 00:08:38.338 { 00:08:38.338 "name": "BaseBdev3", 00:08:38.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.338 "is_configured": false, 00:08:38.338 "data_offset": 0, 00:08:38.338 "data_size": 0 00:08:38.338 }, 00:08:38.338 { 00:08:38.338 "name": "BaseBdev4", 00:08:38.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.338 "is_configured": false, 00:08:38.338 "data_offset": 0, 00:08:38.338 "data_size": 0 00:08:38.338 } 00:08:38.338 ] 00:08:38.338 }' 00:08:38.338 09:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.338 09:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.663 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:38.663 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.663 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.663 [2024-10-30 09:43:17.098891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:38.663 BaseBdev3 00:08:38.663 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.663 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:38.663 09:43:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:38.663 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:38.663 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:38.663 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:38.663 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:38.663 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:38.663 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.663 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.663 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.663 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:38.663 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.663 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.663 [ 00:08:38.663 { 00:08:38.663 "name": "BaseBdev3", 00:08:38.663 "aliases": [ 00:08:38.663 "35bbbf94-413f-46a3-807e-ba07865d8686" 00:08:38.663 ], 00:08:38.663 "product_name": "Malloc disk", 00:08:38.663 "block_size": 512, 00:08:38.663 "num_blocks": 65536, 00:08:38.663 "uuid": "35bbbf94-413f-46a3-807e-ba07865d8686", 00:08:38.663 "assigned_rate_limits": { 00:08:38.663 "rw_ios_per_sec": 0, 00:08:38.663 "rw_mbytes_per_sec": 0, 00:08:38.663 "r_mbytes_per_sec": 0, 00:08:38.663 "w_mbytes_per_sec": 0 00:08:38.663 }, 00:08:38.663 "claimed": true, 00:08:38.663 "claim_type": "exclusive_write", 00:08:38.663 "zoned": false, 00:08:38.663 "supported_io_types": { 
00:08:38.663 "read": true, 00:08:38.663 "write": true, 00:08:38.663 "unmap": true, 00:08:38.663 "flush": true, 00:08:38.664 "reset": true, 00:08:38.664 "nvme_admin": false, 00:08:38.664 "nvme_io": false, 00:08:38.664 "nvme_io_md": false, 00:08:38.664 "write_zeroes": true, 00:08:38.664 "zcopy": true, 00:08:38.664 "get_zone_info": false, 00:08:38.664 "zone_management": false, 00:08:38.664 "zone_append": false, 00:08:38.664 "compare": false, 00:08:38.664 "compare_and_write": false, 00:08:38.664 "abort": true, 00:08:38.664 "seek_hole": false, 00:08:38.664 "seek_data": false, 00:08:38.664 "copy": true, 00:08:38.664 "nvme_iov_md": false 00:08:38.664 }, 00:08:38.664 "memory_domains": [ 00:08:38.664 { 00:08:38.664 "dma_device_id": "system", 00:08:38.664 "dma_device_type": 1 00:08:38.664 }, 00:08:38.664 { 00:08:38.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.664 "dma_device_type": 2 00:08:38.664 } 00:08:38.664 ], 00:08:38.664 "driver_specific": {} 00:08:38.664 } 00:08:38.664 ] 00:08:38.664 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.664 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:38.664 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:38.664 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:38.664 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:38.664 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.664 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.664 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.664 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:08:38.664 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:38.664 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.664 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.664 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.664 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.664 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.664 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.664 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.664 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.664 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.664 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.664 "name": "Existed_Raid", 00:08:38.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.664 "strip_size_kb": 64, 00:08:38.664 "state": "configuring", 00:08:38.664 "raid_level": "raid0", 00:08:38.664 "superblock": false, 00:08:38.664 "num_base_bdevs": 4, 00:08:38.664 "num_base_bdevs_discovered": 3, 00:08:38.664 "num_base_bdevs_operational": 4, 00:08:38.664 "base_bdevs_list": [ 00:08:38.664 { 00:08:38.664 "name": "BaseBdev1", 00:08:38.664 "uuid": "69043f49-60f9-483e-987b-97f9ff4313d7", 00:08:38.664 "is_configured": true, 00:08:38.664 "data_offset": 0, 00:08:38.664 "data_size": 65536 00:08:38.664 }, 00:08:38.664 { 00:08:38.664 "name": "BaseBdev2", 00:08:38.664 "uuid": "1f077e6b-4184-4a3c-b641-cf0bc2270839", 00:08:38.664 
"is_configured": true, 00:08:38.664 "data_offset": 0, 00:08:38.664 "data_size": 65536 00:08:38.664 }, 00:08:38.664 { 00:08:38.664 "name": "BaseBdev3", 00:08:38.664 "uuid": "35bbbf94-413f-46a3-807e-ba07865d8686", 00:08:38.664 "is_configured": true, 00:08:38.664 "data_offset": 0, 00:08:38.664 "data_size": 65536 00:08:38.664 }, 00:08:38.664 { 00:08:38.664 "name": "BaseBdev4", 00:08:38.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.664 "is_configured": false, 00:08:38.664 "data_offset": 0, 00:08:38.664 "data_size": 0 00:08:38.664 } 00:08:38.664 ] 00:08:38.664 }' 00:08:38.664 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.664 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.970 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:38.970 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.970 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.970 [2024-10-30 09:43:17.473471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:38.970 [2024-10-30 09:43:17.473559] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:38.970 [2024-10-30 09:43:17.473583] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:08:38.970 [2024-10-30 09:43:17.473861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:38.970 [2024-10-30 09:43:17.474025] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:38.970 [2024-10-30 09:43:17.474082] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:38.970 [2024-10-30 09:43:17.474397] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.970 BaseBdev4 00:08:38.970 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.970 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:08:38.970 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:08:38.970 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:38.970 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:38.970 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:38.970 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:38.970 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:38.970 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.970 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.970 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.970 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:38.970 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.970 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.970 [ 00:08:38.970 { 00:08:38.970 "name": "BaseBdev4", 00:08:38.970 "aliases": [ 00:08:38.970 "6e1f0331-631f-4138-9152-c153aacc1ab9" 00:08:38.970 ], 00:08:38.970 "product_name": "Malloc disk", 00:08:38.970 "block_size": 512, 00:08:38.970 "num_blocks": 65536, 00:08:38.970 "uuid": "6e1f0331-631f-4138-9152-c153aacc1ab9", 00:08:38.970 
"assigned_rate_limits": { 00:08:38.970 "rw_ios_per_sec": 0, 00:08:38.970 "rw_mbytes_per_sec": 0, 00:08:38.970 "r_mbytes_per_sec": 0, 00:08:38.970 "w_mbytes_per_sec": 0 00:08:38.970 }, 00:08:38.970 "claimed": true, 00:08:38.970 "claim_type": "exclusive_write", 00:08:38.970 "zoned": false, 00:08:38.970 "supported_io_types": { 00:08:38.970 "read": true, 00:08:38.970 "write": true, 00:08:38.970 "unmap": true, 00:08:38.970 "flush": true, 00:08:38.970 "reset": true, 00:08:38.970 "nvme_admin": false, 00:08:38.970 "nvme_io": false, 00:08:38.970 "nvme_io_md": false, 00:08:38.970 "write_zeroes": true, 00:08:38.970 "zcopy": true, 00:08:38.970 "get_zone_info": false, 00:08:38.970 "zone_management": false, 00:08:38.971 "zone_append": false, 00:08:38.971 "compare": false, 00:08:38.971 "compare_and_write": false, 00:08:38.971 "abort": true, 00:08:38.971 "seek_hole": false, 00:08:38.971 "seek_data": false, 00:08:38.971 "copy": true, 00:08:38.971 "nvme_iov_md": false 00:08:38.971 }, 00:08:38.971 "memory_domains": [ 00:08:38.971 { 00:08:38.971 "dma_device_id": "system", 00:08:38.971 "dma_device_type": 1 00:08:38.971 }, 00:08:38.971 { 00:08:38.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.971 "dma_device_type": 2 00:08:38.971 } 00:08:38.971 ], 00:08:38.971 "driver_specific": {} 00:08:38.971 } 00:08:38.971 ] 00:08:38.971 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.971 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:38.971 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:38.971 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:38.971 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:08:38.971 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:38.971 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:38.971 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.971 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.971 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:38.971 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.971 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.971 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.971 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.971 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.971 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.971 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.971 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.971 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.971 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.971 "name": "Existed_Raid", 00:08:38.971 "uuid": "3138895f-6595-484b-95a8-1a4c4e02f0f8", 00:08:38.971 "strip_size_kb": 64, 00:08:38.971 "state": "online", 00:08:38.971 "raid_level": "raid0", 00:08:38.971 "superblock": false, 00:08:38.971 "num_base_bdevs": 4, 00:08:38.971 "num_base_bdevs_discovered": 4, 00:08:38.971 "num_base_bdevs_operational": 4, 00:08:38.971 "base_bdevs_list": [ 00:08:38.971 { 
00:08:38.971 "name": "BaseBdev1", 00:08:38.971 "uuid": "69043f49-60f9-483e-987b-97f9ff4313d7", 00:08:38.971 "is_configured": true, 00:08:38.971 "data_offset": 0, 00:08:38.971 "data_size": 65536 00:08:38.971 }, 00:08:38.971 { 00:08:38.971 "name": "BaseBdev2", 00:08:38.971 "uuid": "1f077e6b-4184-4a3c-b641-cf0bc2270839", 00:08:38.971 "is_configured": true, 00:08:38.971 "data_offset": 0, 00:08:38.971 "data_size": 65536 00:08:38.971 }, 00:08:38.971 { 00:08:38.971 "name": "BaseBdev3", 00:08:38.971 "uuid": "35bbbf94-413f-46a3-807e-ba07865d8686", 00:08:38.971 "is_configured": true, 00:08:38.971 "data_offset": 0, 00:08:38.971 "data_size": 65536 00:08:38.971 }, 00:08:38.971 { 00:08:38.971 "name": "BaseBdev4", 00:08:38.971 "uuid": "6e1f0331-631f-4138-9152-c153aacc1ab9", 00:08:38.971 "is_configured": true, 00:08:38.971 "data_offset": 0, 00:08:38.971 "data_size": 65536 00:08:38.971 } 00:08:38.971 ] 00:08:38.971 }' 00:08:38.971 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.971 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.232 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:39.232 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:39.232 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:39.232 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:39.232 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:39.232 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:39.232 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:39.232 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:08:39.232 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.232 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.232 [2024-10-30 09:43:17.829964] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.232 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.232 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:39.232 "name": "Existed_Raid", 00:08:39.232 "aliases": [ 00:08:39.232 "3138895f-6595-484b-95a8-1a4c4e02f0f8" 00:08:39.232 ], 00:08:39.232 "product_name": "Raid Volume", 00:08:39.232 "block_size": 512, 00:08:39.232 "num_blocks": 262144, 00:08:39.232 "uuid": "3138895f-6595-484b-95a8-1a4c4e02f0f8", 00:08:39.232 "assigned_rate_limits": { 00:08:39.232 "rw_ios_per_sec": 0, 00:08:39.232 "rw_mbytes_per_sec": 0, 00:08:39.232 "r_mbytes_per_sec": 0, 00:08:39.232 "w_mbytes_per_sec": 0 00:08:39.232 }, 00:08:39.232 "claimed": false, 00:08:39.232 "zoned": false, 00:08:39.232 "supported_io_types": { 00:08:39.232 "read": true, 00:08:39.232 "write": true, 00:08:39.232 "unmap": true, 00:08:39.232 "flush": true, 00:08:39.232 "reset": true, 00:08:39.232 "nvme_admin": false, 00:08:39.232 "nvme_io": false, 00:08:39.232 "nvme_io_md": false, 00:08:39.232 "write_zeroes": true, 00:08:39.232 "zcopy": false, 00:08:39.232 "get_zone_info": false, 00:08:39.232 "zone_management": false, 00:08:39.232 "zone_append": false, 00:08:39.232 "compare": false, 00:08:39.232 "compare_and_write": false, 00:08:39.232 "abort": false, 00:08:39.232 "seek_hole": false, 00:08:39.232 "seek_data": false, 00:08:39.232 "copy": false, 00:08:39.232 "nvme_iov_md": false 00:08:39.232 }, 00:08:39.232 "memory_domains": [ 00:08:39.232 { 00:08:39.232 "dma_device_id": "system", 00:08:39.232 "dma_device_type": 1 00:08:39.232 }, 00:08:39.232 { 00:08:39.232 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.232 "dma_device_type": 2 00:08:39.232 }, 00:08:39.232 { 00:08:39.232 "dma_device_id": "system", 00:08:39.232 "dma_device_type": 1 00:08:39.232 }, 00:08:39.232 { 00:08:39.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.232 "dma_device_type": 2 00:08:39.232 }, 00:08:39.232 { 00:08:39.232 "dma_device_id": "system", 00:08:39.232 "dma_device_type": 1 00:08:39.232 }, 00:08:39.232 { 00:08:39.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.232 "dma_device_type": 2 00:08:39.232 }, 00:08:39.232 { 00:08:39.232 "dma_device_id": "system", 00:08:39.232 "dma_device_type": 1 00:08:39.232 }, 00:08:39.232 { 00:08:39.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.232 "dma_device_type": 2 00:08:39.232 } 00:08:39.232 ], 00:08:39.232 "driver_specific": { 00:08:39.232 "raid": { 00:08:39.232 "uuid": "3138895f-6595-484b-95a8-1a4c4e02f0f8", 00:08:39.232 "strip_size_kb": 64, 00:08:39.232 "state": "online", 00:08:39.232 "raid_level": "raid0", 00:08:39.232 "superblock": false, 00:08:39.232 "num_base_bdevs": 4, 00:08:39.232 "num_base_bdevs_discovered": 4, 00:08:39.232 "num_base_bdevs_operational": 4, 00:08:39.232 "base_bdevs_list": [ 00:08:39.232 { 00:08:39.232 "name": "BaseBdev1", 00:08:39.232 "uuid": "69043f49-60f9-483e-987b-97f9ff4313d7", 00:08:39.232 "is_configured": true, 00:08:39.232 "data_offset": 0, 00:08:39.232 "data_size": 65536 00:08:39.232 }, 00:08:39.232 { 00:08:39.232 "name": "BaseBdev2", 00:08:39.232 "uuid": "1f077e6b-4184-4a3c-b641-cf0bc2270839", 00:08:39.232 "is_configured": true, 00:08:39.232 "data_offset": 0, 00:08:39.232 "data_size": 65536 00:08:39.232 }, 00:08:39.232 { 00:08:39.232 "name": "BaseBdev3", 00:08:39.232 "uuid": "35bbbf94-413f-46a3-807e-ba07865d8686", 00:08:39.232 "is_configured": true, 00:08:39.232 "data_offset": 0, 00:08:39.232 "data_size": 65536 00:08:39.232 }, 00:08:39.232 { 00:08:39.232 "name": "BaseBdev4", 00:08:39.232 "uuid": "6e1f0331-631f-4138-9152-c153aacc1ab9", 00:08:39.232 
"is_configured": true, 00:08:39.232 "data_offset": 0, 00:08:39.232 "data_size": 65536 00:08:39.232 } 00:08:39.232 ] 00:08:39.232 } 00:08:39.232 } 00:08:39.232 }' 00:08:39.232 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:39.494 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:39.494 BaseBdev2 00:08:39.494 BaseBdev3 00:08:39.494 BaseBdev4' 00:08:39.494 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.494 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:39.494 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.494 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:39.494 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.494 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.494 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.494 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.494 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.494 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.494 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.494 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:08:39.494 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:39.494 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.494 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.494 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.494 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.494 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.494 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.494 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:39.494 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.494 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.494 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.494 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.494 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.494 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.494 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.494 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:08:39.494 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.494 09:43:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.494 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.494 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.494 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.494 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.494 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:39.494 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.494 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.494 [2024-10-30 09:43:18.049716] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:39.494 [2024-10-30 09:43:18.049839] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:39.494 [2024-10-30 09:43:18.050008] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.755 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.755 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:39.755 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:39.755 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:39.755 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:39.755 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:39.755 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state 
Existed_Raid offline raid0 64 3 00:08:39.755 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.755 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:39.755 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.755 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.755 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.755 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.755 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.755 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.755 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.755 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.755 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.755 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.755 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.755 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.755 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.755 "name": "Existed_Raid", 00:08:39.755 "uuid": "3138895f-6595-484b-95a8-1a4c4e02f0f8", 00:08:39.755 "strip_size_kb": 64, 00:08:39.755 "state": "offline", 00:08:39.755 "raid_level": "raid0", 00:08:39.755 "superblock": false, 00:08:39.755 "num_base_bdevs": 4, 00:08:39.755 
"num_base_bdevs_discovered": 3, 00:08:39.755 "num_base_bdevs_operational": 3, 00:08:39.755 "base_bdevs_list": [ 00:08:39.755 { 00:08:39.755 "name": null, 00:08:39.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.755 "is_configured": false, 00:08:39.755 "data_offset": 0, 00:08:39.755 "data_size": 65536 00:08:39.755 }, 00:08:39.755 { 00:08:39.755 "name": "BaseBdev2", 00:08:39.755 "uuid": "1f077e6b-4184-4a3c-b641-cf0bc2270839", 00:08:39.755 "is_configured": true, 00:08:39.755 "data_offset": 0, 00:08:39.755 "data_size": 65536 00:08:39.755 }, 00:08:39.755 { 00:08:39.755 "name": "BaseBdev3", 00:08:39.756 "uuid": "35bbbf94-413f-46a3-807e-ba07865d8686", 00:08:39.756 "is_configured": true, 00:08:39.756 "data_offset": 0, 00:08:39.756 "data_size": 65536 00:08:39.756 }, 00:08:39.756 { 00:08:39.756 "name": "BaseBdev4", 00:08:39.756 "uuid": "6e1f0331-631f-4138-9152-c153aacc1ab9", 00:08:39.756 "is_configured": true, 00:08:39.756 "data_offset": 0, 00:08:39.756 "data_size": 65536 00:08:39.756 } 00:08:39.756 ] 00:08:39.756 }' 00:08:39.756 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.756 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.017 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:40.017 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:40.017 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:40.017 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.017 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.017 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.017 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:40.017 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:40.017 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:40.017 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:40.017 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.017 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.017 [2024-10-30 09:43:18.477157] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:40.017 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.017 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:40.017 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:40.017 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.017 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.017 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:40.017 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.017 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.017 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:40.017 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:40.017 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:40.018 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:40.018 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.018 [2024-10-30 09:43:18.574708] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:40.018 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.018 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:40.018 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.281 [2024-10-30 09:43:18.672534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:08:40.281 [2024-10-30 09:43:18.672666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.281 BaseBdev2 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.281 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.281 [ 00:08:40.281 { 00:08:40.281 "name": "BaseBdev2", 00:08:40.281 "aliases": [ 00:08:40.281 "d8105fe9-ee89-4ed6-8685-68e84fc32d12" 00:08:40.281 ], 00:08:40.281 "product_name": "Malloc disk", 00:08:40.281 "block_size": 512, 00:08:40.281 "num_blocks": 65536, 00:08:40.281 "uuid": "d8105fe9-ee89-4ed6-8685-68e84fc32d12", 00:08:40.281 "assigned_rate_limits": { 00:08:40.281 "rw_ios_per_sec": 0, 00:08:40.281 "rw_mbytes_per_sec": 0, 00:08:40.281 "r_mbytes_per_sec": 0, 00:08:40.281 "w_mbytes_per_sec": 0 00:08:40.281 }, 00:08:40.281 "claimed": false, 00:08:40.281 "zoned": false, 00:08:40.281 "supported_io_types": { 00:08:40.281 "read": true, 00:08:40.281 "write": true, 00:08:40.281 "unmap": true, 
00:08:40.282 "flush": true, 00:08:40.282 "reset": true, 00:08:40.282 "nvme_admin": false, 00:08:40.282 "nvme_io": false, 00:08:40.282 "nvme_io_md": false, 00:08:40.282 "write_zeroes": true, 00:08:40.282 "zcopy": true, 00:08:40.282 "get_zone_info": false, 00:08:40.282 "zone_management": false, 00:08:40.282 "zone_append": false, 00:08:40.282 "compare": false, 00:08:40.282 "compare_and_write": false, 00:08:40.282 "abort": true, 00:08:40.282 "seek_hole": false, 00:08:40.282 "seek_data": false, 00:08:40.282 "copy": true, 00:08:40.282 "nvme_iov_md": false 00:08:40.282 }, 00:08:40.282 "memory_domains": [ 00:08:40.282 { 00:08:40.282 "dma_device_id": "system", 00:08:40.282 "dma_device_type": 1 00:08:40.282 }, 00:08:40.282 { 00:08:40.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.282 "dma_device_type": 2 00:08:40.282 } 00:08:40.282 ], 00:08:40.282 "driver_specific": {} 00:08:40.282 } 00:08:40.282 ] 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.282 BaseBdev3 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.282 [ 00:08:40.282 { 00:08:40.282 "name": "BaseBdev3", 00:08:40.282 "aliases": [ 00:08:40.282 "ebdac896-e559-4caf-8de3-38035a82d479" 00:08:40.282 ], 00:08:40.282 "product_name": "Malloc disk", 00:08:40.282 "block_size": 512, 00:08:40.282 "num_blocks": 65536, 00:08:40.282 "uuid": "ebdac896-e559-4caf-8de3-38035a82d479", 00:08:40.282 "assigned_rate_limits": { 00:08:40.282 "rw_ios_per_sec": 0, 00:08:40.282 "rw_mbytes_per_sec": 0, 00:08:40.282 "r_mbytes_per_sec": 0, 00:08:40.282 "w_mbytes_per_sec": 0 00:08:40.282 }, 00:08:40.282 "claimed": false, 00:08:40.282 "zoned": false, 00:08:40.282 "supported_io_types": { 00:08:40.282 "read": true, 00:08:40.282 "write": true, 00:08:40.282 "unmap": true, 
00:08:40.282 "flush": true, 00:08:40.282 "reset": true, 00:08:40.282 "nvme_admin": false, 00:08:40.282 "nvme_io": false, 00:08:40.282 "nvme_io_md": false, 00:08:40.282 "write_zeroes": true, 00:08:40.282 "zcopy": true, 00:08:40.282 "get_zone_info": false, 00:08:40.282 "zone_management": false, 00:08:40.282 "zone_append": false, 00:08:40.282 "compare": false, 00:08:40.282 "compare_and_write": false, 00:08:40.282 "abort": true, 00:08:40.282 "seek_hole": false, 00:08:40.282 "seek_data": false, 00:08:40.282 "copy": true, 00:08:40.282 "nvme_iov_md": false 00:08:40.282 }, 00:08:40.282 "memory_domains": [ 00:08:40.282 { 00:08:40.282 "dma_device_id": "system", 00:08:40.282 "dma_device_type": 1 00:08:40.282 }, 00:08:40.282 { 00:08:40.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.282 "dma_device_type": 2 00:08:40.282 } 00:08:40.282 ], 00:08:40.282 "driver_specific": {} 00:08:40.282 } 00:08:40.282 ] 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.282 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.545 BaseBdev4 00:08:40.545 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.545 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:08:40.545 09:43:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:08:40.545 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:40.545 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:40.545 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:40.545 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:40.545 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:40.545 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.545 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.545 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.545 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:40.545 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.545 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.545 [ 00:08:40.545 { 00:08:40.545 "name": "BaseBdev4", 00:08:40.545 "aliases": [ 00:08:40.545 "0deffebf-c21d-4267-b8a1-7e9b041e4de0" 00:08:40.545 ], 00:08:40.545 "product_name": "Malloc disk", 00:08:40.545 "block_size": 512, 00:08:40.545 "num_blocks": 65536, 00:08:40.545 "uuid": "0deffebf-c21d-4267-b8a1-7e9b041e4de0", 00:08:40.545 "assigned_rate_limits": { 00:08:40.545 "rw_ios_per_sec": 0, 00:08:40.545 "rw_mbytes_per_sec": 0, 00:08:40.545 "r_mbytes_per_sec": 0, 00:08:40.545 "w_mbytes_per_sec": 0 00:08:40.545 }, 00:08:40.545 "claimed": false, 00:08:40.545 "zoned": false, 00:08:40.545 "supported_io_types": { 00:08:40.545 "read": true, 00:08:40.545 "write": true, 00:08:40.545 "unmap": true, 
00:08:40.545 "flush": true, 00:08:40.545 "reset": true, 00:08:40.545 "nvme_admin": false, 00:08:40.545 "nvme_io": false, 00:08:40.545 "nvme_io_md": false, 00:08:40.545 "write_zeroes": true, 00:08:40.545 "zcopy": true, 00:08:40.545 "get_zone_info": false, 00:08:40.545 "zone_management": false, 00:08:40.545 "zone_append": false, 00:08:40.545 "compare": false, 00:08:40.545 "compare_and_write": false, 00:08:40.546 "abort": true, 00:08:40.546 "seek_hole": false, 00:08:40.546 "seek_data": false, 00:08:40.546 "copy": true, 00:08:40.546 "nvme_iov_md": false 00:08:40.546 }, 00:08:40.546 "memory_domains": [ 00:08:40.546 { 00:08:40.546 "dma_device_id": "system", 00:08:40.546 "dma_device_type": 1 00:08:40.546 }, 00:08:40.546 { 00:08:40.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.546 "dma_device_type": 2 00:08:40.546 } 00:08:40.546 ], 00:08:40.546 "driver_specific": {} 00:08:40.546 } 00:08:40.546 ] 00:08:40.546 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.546 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:40.546 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:40.546 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:40.546 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:40.546 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.546 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.546 [2024-10-30 09:43:18.947052] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:40.546 [2024-10-30 09:43:18.947193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 
00:08:40.546 [2024-10-30 09:43:18.947260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:40.546 [2024-10-30 09:43:18.949138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:40.546 [2024-10-30 09:43:18.949264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:40.546 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.546 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:40.546 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.546 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.546 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.546 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.546 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:40.546 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.546 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.546 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.546 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.546 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.546 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.546 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.546 09:43:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.546 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.546 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.546 "name": "Existed_Raid", 00:08:40.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.546 "strip_size_kb": 64, 00:08:40.546 "state": "configuring", 00:08:40.546 "raid_level": "raid0", 00:08:40.546 "superblock": false, 00:08:40.546 "num_base_bdevs": 4, 00:08:40.546 "num_base_bdevs_discovered": 3, 00:08:40.546 "num_base_bdevs_operational": 4, 00:08:40.546 "base_bdevs_list": [ 00:08:40.546 { 00:08:40.546 "name": "BaseBdev1", 00:08:40.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.546 "is_configured": false, 00:08:40.546 "data_offset": 0, 00:08:40.546 "data_size": 0 00:08:40.546 }, 00:08:40.546 { 00:08:40.546 "name": "BaseBdev2", 00:08:40.546 "uuid": "d8105fe9-ee89-4ed6-8685-68e84fc32d12", 00:08:40.546 "is_configured": true, 00:08:40.546 "data_offset": 0, 00:08:40.546 "data_size": 65536 00:08:40.546 }, 00:08:40.546 { 00:08:40.546 "name": "BaseBdev3", 00:08:40.546 "uuid": "ebdac896-e559-4caf-8de3-38035a82d479", 00:08:40.546 "is_configured": true, 00:08:40.546 "data_offset": 0, 00:08:40.546 "data_size": 65536 00:08:40.546 }, 00:08:40.546 { 00:08:40.546 "name": "BaseBdev4", 00:08:40.546 "uuid": "0deffebf-c21d-4267-b8a1-7e9b041e4de0", 00:08:40.546 "is_configured": true, 00:08:40.546 "data_offset": 0, 00:08:40.546 "data_size": 65536 00:08:40.546 } 00:08:40.546 ] 00:08:40.546 }' 00:08:40.546 09:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.546 09:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.808 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:40.808 
09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.808 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.808 [2024-10-30 09:43:19.311151] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:40.808 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.808 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:40.808 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.808 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.808 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.808 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.808 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:40.808 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.808 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.808 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.808 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.808 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.808 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.808 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.808 09:43:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:40.808 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.808 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.808 "name": "Existed_Raid", 00:08:40.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.808 "strip_size_kb": 64, 00:08:40.808 "state": "configuring", 00:08:40.808 "raid_level": "raid0", 00:08:40.808 "superblock": false, 00:08:40.808 "num_base_bdevs": 4, 00:08:40.808 "num_base_bdevs_discovered": 2, 00:08:40.808 "num_base_bdevs_operational": 4, 00:08:40.808 "base_bdevs_list": [ 00:08:40.808 { 00:08:40.808 "name": "BaseBdev1", 00:08:40.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.808 "is_configured": false, 00:08:40.808 "data_offset": 0, 00:08:40.808 "data_size": 0 00:08:40.808 }, 00:08:40.808 { 00:08:40.808 "name": null, 00:08:40.808 "uuid": "d8105fe9-ee89-4ed6-8685-68e84fc32d12", 00:08:40.808 "is_configured": false, 00:08:40.808 "data_offset": 0, 00:08:40.808 "data_size": 65536 00:08:40.808 }, 00:08:40.808 { 00:08:40.808 "name": "BaseBdev3", 00:08:40.808 "uuid": "ebdac896-e559-4caf-8de3-38035a82d479", 00:08:40.808 "is_configured": true, 00:08:40.808 "data_offset": 0, 00:08:40.808 "data_size": 65536 00:08:40.808 }, 00:08:40.808 { 00:08:40.808 "name": "BaseBdev4", 00:08:40.808 "uuid": "0deffebf-c21d-4267-b8a1-7e9b041e4de0", 00:08:40.809 "is_configured": true, 00:08:40.809 "data_offset": 0, 00:08:40.809 "data_size": 65536 00:08:40.809 } 00:08:40.809 ] 00:08:40.809 }' 00:08:40.809 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.809 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.071 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.071 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:08:41.071 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.071 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.071 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.071 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:41.071 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:41.071 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.071 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.071 [2024-10-30 09:43:19.689552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:41.333 BaseBdev1 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.333 09:43:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.333 [ 00:08:41.333 { 00:08:41.333 "name": "BaseBdev1", 00:08:41.333 "aliases": [ 00:08:41.333 "17daf977-f371-49be-91d4-3029942a7345" 00:08:41.333 ], 00:08:41.333 "product_name": "Malloc disk", 00:08:41.333 "block_size": 512, 00:08:41.333 "num_blocks": 65536, 00:08:41.333 "uuid": "17daf977-f371-49be-91d4-3029942a7345", 00:08:41.333 "assigned_rate_limits": { 00:08:41.333 "rw_ios_per_sec": 0, 00:08:41.333 "rw_mbytes_per_sec": 0, 00:08:41.333 "r_mbytes_per_sec": 0, 00:08:41.333 "w_mbytes_per_sec": 0 00:08:41.333 }, 00:08:41.333 "claimed": true, 00:08:41.333 "claim_type": "exclusive_write", 00:08:41.333 "zoned": false, 00:08:41.333 "supported_io_types": { 00:08:41.333 "read": true, 00:08:41.333 "write": true, 00:08:41.333 "unmap": true, 00:08:41.333 "flush": true, 00:08:41.333 "reset": true, 00:08:41.333 "nvme_admin": false, 00:08:41.333 "nvme_io": false, 00:08:41.333 "nvme_io_md": false, 00:08:41.333 "write_zeroes": true, 00:08:41.333 "zcopy": true, 00:08:41.333 "get_zone_info": false, 00:08:41.333 "zone_management": false, 00:08:41.333 "zone_append": false, 00:08:41.333 "compare": false, 00:08:41.333 "compare_and_write": false, 00:08:41.333 "abort": true, 00:08:41.333 "seek_hole": false, 00:08:41.333 "seek_data": false, 00:08:41.333 "copy": true, 00:08:41.333 "nvme_iov_md": false 00:08:41.333 }, 00:08:41.333 "memory_domains": [ 00:08:41.333 { 00:08:41.333 "dma_device_id": "system", 00:08:41.333 "dma_device_type": 1 00:08:41.333 }, 
00:08:41.333 { 00:08:41.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.333 "dma_device_type": 2 00:08:41.333 } 00:08:41.333 ], 00:08:41.333 "driver_specific": {} 00:08:41.333 } 00:08:41.333 ] 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.333 "name": "Existed_Raid", 00:08:41.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.333 "strip_size_kb": 64, 00:08:41.333 "state": "configuring", 00:08:41.333 "raid_level": "raid0", 00:08:41.333 "superblock": false, 00:08:41.333 "num_base_bdevs": 4, 00:08:41.333 "num_base_bdevs_discovered": 3, 00:08:41.333 "num_base_bdevs_operational": 4, 00:08:41.333 "base_bdevs_list": [ 00:08:41.333 { 00:08:41.333 "name": "BaseBdev1", 00:08:41.333 "uuid": "17daf977-f371-49be-91d4-3029942a7345", 00:08:41.333 "is_configured": true, 00:08:41.333 "data_offset": 0, 00:08:41.333 "data_size": 65536 00:08:41.333 }, 00:08:41.333 { 00:08:41.333 "name": null, 00:08:41.333 "uuid": "d8105fe9-ee89-4ed6-8685-68e84fc32d12", 00:08:41.333 "is_configured": false, 00:08:41.333 "data_offset": 0, 00:08:41.333 "data_size": 65536 00:08:41.333 }, 00:08:41.333 { 00:08:41.333 "name": "BaseBdev3", 00:08:41.333 "uuid": "ebdac896-e559-4caf-8de3-38035a82d479", 00:08:41.333 "is_configured": true, 00:08:41.333 "data_offset": 0, 00:08:41.333 "data_size": 65536 00:08:41.333 }, 00:08:41.333 { 00:08:41.333 "name": "BaseBdev4", 00:08:41.333 "uuid": "0deffebf-c21d-4267-b8a1-7e9b041e4de0", 00:08:41.333 "is_configured": true, 00:08:41.333 "data_offset": 0, 00:08:41.333 "data_size": 65536 00:08:41.333 } 00:08:41.333 ] 00:08:41.333 }' 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.333 09:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.594 [2024-10-30 09:43:20.049708] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.594 09:43:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.594 "name": "Existed_Raid", 00:08:41.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.594 "strip_size_kb": 64, 00:08:41.594 "state": "configuring", 00:08:41.594 "raid_level": "raid0", 00:08:41.594 "superblock": false, 00:08:41.594 "num_base_bdevs": 4, 00:08:41.594 "num_base_bdevs_discovered": 2, 00:08:41.594 "num_base_bdevs_operational": 4, 00:08:41.594 "base_bdevs_list": [ 00:08:41.594 { 00:08:41.594 "name": "BaseBdev1", 00:08:41.594 "uuid": "17daf977-f371-49be-91d4-3029942a7345", 00:08:41.594 "is_configured": true, 00:08:41.594 "data_offset": 0, 00:08:41.594 "data_size": 65536 00:08:41.594 }, 00:08:41.594 { 00:08:41.594 "name": null, 00:08:41.594 "uuid": "d8105fe9-ee89-4ed6-8685-68e84fc32d12", 00:08:41.594 "is_configured": false, 00:08:41.594 "data_offset": 0, 00:08:41.594 "data_size": 65536 00:08:41.594 }, 00:08:41.594 { 00:08:41.594 "name": null, 00:08:41.594 "uuid": "ebdac896-e559-4caf-8de3-38035a82d479", 00:08:41.594 "is_configured": false, 00:08:41.594 "data_offset": 0, 00:08:41.594 "data_size": 65536 00:08:41.594 }, 00:08:41.594 { 00:08:41.594 "name": "BaseBdev4", 00:08:41.594 
"uuid": "0deffebf-c21d-4267-b8a1-7e9b041e4de0", 00:08:41.594 "is_configured": true, 00:08:41.594 "data_offset": 0, 00:08:41.594 "data_size": 65536 00:08:41.594 } 00:08:41.594 ] 00:08:41.594 }' 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.594 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.855 [2024-10-30 09:43:20.393791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.855 "name": "Existed_Raid", 00:08:41.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.855 "strip_size_kb": 64, 00:08:41.855 "state": "configuring", 00:08:41.855 "raid_level": "raid0", 00:08:41.855 "superblock": false, 00:08:41.855 "num_base_bdevs": 4, 00:08:41.855 "num_base_bdevs_discovered": 3, 00:08:41.855 "num_base_bdevs_operational": 4, 00:08:41.855 "base_bdevs_list": [ 00:08:41.855 { 00:08:41.855 "name": "BaseBdev1", 00:08:41.855 "uuid": 
"17daf977-f371-49be-91d4-3029942a7345", 00:08:41.855 "is_configured": true, 00:08:41.855 "data_offset": 0, 00:08:41.855 "data_size": 65536 00:08:41.855 }, 00:08:41.855 { 00:08:41.855 "name": null, 00:08:41.855 "uuid": "d8105fe9-ee89-4ed6-8685-68e84fc32d12", 00:08:41.855 "is_configured": false, 00:08:41.855 "data_offset": 0, 00:08:41.855 "data_size": 65536 00:08:41.855 }, 00:08:41.855 { 00:08:41.855 "name": "BaseBdev3", 00:08:41.855 "uuid": "ebdac896-e559-4caf-8de3-38035a82d479", 00:08:41.855 "is_configured": true, 00:08:41.855 "data_offset": 0, 00:08:41.855 "data_size": 65536 00:08:41.855 }, 00:08:41.855 { 00:08:41.855 "name": "BaseBdev4", 00:08:41.855 "uuid": "0deffebf-c21d-4267-b8a1-7e9b041e4de0", 00:08:41.855 "is_configured": true, 00:08:41.855 "data_offset": 0, 00:08:41.855 "data_size": 65536 00:08:41.855 } 00:08:41.855 ] 00:08:41.855 }' 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.855 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.115 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.115 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:42.115 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.115 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.115 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.378 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:42.378 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:42.378 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.378 09:43:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.378 [2024-10-30 09:43:20.745888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:42.378 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.378 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:42.378 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.378 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.378 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.378 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.378 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:42.378 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.378 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.378 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.378 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.378 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.378 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.378 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.378 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.378 09:43:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.378 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.378 "name": "Existed_Raid", 00:08:42.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.378 "strip_size_kb": 64, 00:08:42.378 "state": "configuring", 00:08:42.378 "raid_level": "raid0", 00:08:42.378 "superblock": false, 00:08:42.378 "num_base_bdevs": 4, 00:08:42.378 "num_base_bdevs_discovered": 2, 00:08:42.378 "num_base_bdevs_operational": 4, 00:08:42.378 "base_bdevs_list": [ 00:08:42.378 { 00:08:42.378 "name": null, 00:08:42.378 "uuid": "17daf977-f371-49be-91d4-3029942a7345", 00:08:42.378 "is_configured": false, 00:08:42.378 "data_offset": 0, 00:08:42.378 "data_size": 65536 00:08:42.378 }, 00:08:42.378 { 00:08:42.378 "name": null, 00:08:42.378 "uuid": "d8105fe9-ee89-4ed6-8685-68e84fc32d12", 00:08:42.378 "is_configured": false, 00:08:42.378 "data_offset": 0, 00:08:42.378 "data_size": 65536 00:08:42.378 }, 00:08:42.378 { 00:08:42.378 "name": "BaseBdev3", 00:08:42.378 "uuid": "ebdac896-e559-4caf-8de3-38035a82d479", 00:08:42.378 "is_configured": true, 00:08:42.378 "data_offset": 0, 00:08:42.378 "data_size": 65536 00:08:42.378 }, 00:08:42.378 { 00:08:42.378 "name": "BaseBdev4", 00:08:42.378 "uuid": "0deffebf-c21d-4267-b8a1-7e9b041e4de0", 00:08:42.378 "is_configured": true, 00:08:42.378 "data_offset": 0, 00:08:42.378 "data_size": 65536 00:08:42.378 } 00:08:42.378 ] 00:08:42.378 }' 00:08:42.378 09:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.378 09:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.640 [2024-10-30 09:43:21.151878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.640 "name": "Existed_Raid", 00:08:42.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.640 "strip_size_kb": 64, 00:08:42.640 "state": "configuring", 00:08:42.640 "raid_level": "raid0", 00:08:42.640 "superblock": false, 00:08:42.640 "num_base_bdevs": 4, 00:08:42.640 "num_base_bdevs_discovered": 3, 00:08:42.640 "num_base_bdevs_operational": 4, 00:08:42.640 "base_bdevs_list": [ 00:08:42.640 { 00:08:42.640 "name": null, 00:08:42.640 "uuid": "17daf977-f371-49be-91d4-3029942a7345", 00:08:42.640 "is_configured": false, 00:08:42.640 "data_offset": 0, 00:08:42.640 "data_size": 65536 00:08:42.640 }, 00:08:42.640 { 00:08:42.640 "name": "BaseBdev2", 00:08:42.640 "uuid": "d8105fe9-ee89-4ed6-8685-68e84fc32d12", 00:08:42.640 "is_configured": true, 00:08:42.640 "data_offset": 0, 00:08:42.640 "data_size": 65536 00:08:42.640 }, 00:08:42.640 { 00:08:42.640 "name": "BaseBdev3", 00:08:42.640 "uuid": "ebdac896-e559-4caf-8de3-38035a82d479", 00:08:42.640 "is_configured": true, 00:08:42.640 "data_offset": 0, 00:08:42.640 "data_size": 65536 00:08:42.640 }, 00:08:42.640 { 00:08:42.640 "name": "BaseBdev4", 00:08:42.640 "uuid": "0deffebf-c21d-4267-b8a1-7e9b041e4de0", 00:08:42.640 
"is_configured": true, 00:08:42.640 "data_offset": 0, 00:08:42.640 "data_size": 65536 00:08:42.640 } 00:08:42.640 ] 00:08:42.640 }' 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.640 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.901 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.901 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.901 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:42.901 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.901 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.901 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:42.901 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.901 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.901 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.901 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:42.901 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.901 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 17daf977-f371-49be-91d4-3029942a7345 00:08:42.901 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.901 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.163 [2024-10-30 
09:43:21.546088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:43.163 [2024-10-30 09:43:21.546126] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:43.163 [2024-10-30 09:43:21.546133] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:08:43.163 [2024-10-30 09:43:21.546382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:43.163 [2024-10-30 09:43:21.546503] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:43.163 [2024-10-30 09:43:21.546518] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:43.163 [2024-10-30 09:43:21.546713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.163 NewBaseBdev 00:08:43.163 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.163 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:43.163 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:08:43.163 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:43.163 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:43.163 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:43.163 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:43.163 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:43.163 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.163 09:43:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:43.163 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.163 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:43.163 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.163 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.163 [ 00:08:43.163 { 00:08:43.163 "name": "NewBaseBdev", 00:08:43.163 "aliases": [ 00:08:43.163 "17daf977-f371-49be-91d4-3029942a7345" 00:08:43.163 ], 00:08:43.163 "product_name": "Malloc disk", 00:08:43.163 "block_size": 512, 00:08:43.163 "num_blocks": 65536, 00:08:43.163 "uuid": "17daf977-f371-49be-91d4-3029942a7345", 00:08:43.163 "assigned_rate_limits": { 00:08:43.163 "rw_ios_per_sec": 0, 00:08:43.163 "rw_mbytes_per_sec": 0, 00:08:43.163 "r_mbytes_per_sec": 0, 00:08:43.163 "w_mbytes_per_sec": 0 00:08:43.163 }, 00:08:43.163 "claimed": true, 00:08:43.163 "claim_type": "exclusive_write", 00:08:43.163 "zoned": false, 00:08:43.163 "supported_io_types": { 00:08:43.163 "read": true, 00:08:43.163 "write": true, 00:08:43.163 "unmap": true, 00:08:43.163 "flush": true, 00:08:43.163 "reset": true, 00:08:43.163 "nvme_admin": false, 00:08:43.163 "nvme_io": false, 00:08:43.163 "nvme_io_md": false, 00:08:43.163 "write_zeroes": true, 00:08:43.163 "zcopy": true, 00:08:43.163 "get_zone_info": false, 00:08:43.163 "zone_management": false, 00:08:43.163 "zone_append": false, 00:08:43.163 "compare": false, 00:08:43.163 "compare_and_write": false, 00:08:43.163 "abort": true, 00:08:43.163 "seek_hole": false, 00:08:43.163 "seek_data": false, 00:08:43.163 "copy": true, 00:08:43.163 "nvme_iov_md": false 00:08:43.163 }, 00:08:43.163 "memory_domains": [ 00:08:43.163 { 00:08:43.163 "dma_device_id": "system", 00:08:43.163 "dma_device_type": 1 00:08:43.163 }, 00:08:43.163 { 00:08:43.163 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.163 "dma_device_type": 2 00:08:43.163 } 00:08:43.163 ], 00:08:43.163 "driver_specific": {} 00:08:43.163 } 00:08:43.163 ] 00:08:43.163 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.163 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:43.163 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:08:43.163 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.163 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.163 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.163 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.163 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:43.163 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.163 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.163 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.164 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.164 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.164 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.164 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.164 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:43.164 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.164 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.164 "name": "Existed_Raid", 00:08:43.164 "uuid": "b2685d6e-af70-428a-ab17-1e07dc3fee9f", 00:08:43.164 "strip_size_kb": 64, 00:08:43.164 "state": "online", 00:08:43.164 "raid_level": "raid0", 00:08:43.164 "superblock": false, 00:08:43.164 "num_base_bdevs": 4, 00:08:43.164 "num_base_bdevs_discovered": 4, 00:08:43.164 "num_base_bdevs_operational": 4, 00:08:43.164 "base_bdevs_list": [ 00:08:43.164 { 00:08:43.164 "name": "NewBaseBdev", 00:08:43.164 "uuid": "17daf977-f371-49be-91d4-3029942a7345", 00:08:43.164 "is_configured": true, 00:08:43.164 "data_offset": 0, 00:08:43.164 "data_size": 65536 00:08:43.164 }, 00:08:43.164 { 00:08:43.164 "name": "BaseBdev2", 00:08:43.164 "uuid": "d8105fe9-ee89-4ed6-8685-68e84fc32d12", 00:08:43.164 "is_configured": true, 00:08:43.164 "data_offset": 0, 00:08:43.164 "data_size": 65536 00:08:43.164 }, 00:08:43.164 { 00:08:43.164 "name": "BaseBdev3", 00:08:43.164 "uuid": "ebdac896-e559-4caf-8de3-38035a82d479", 00:08:43.164 "is_configured": true, 00:08:43.164 "data_offset": 0, 00:08:43.164 "data_size": 65536 00:08:43.164 }, 00:08:43.164 { 00:08:43.164 "name": "BaseBdev4", 00:08:43.164 "uuid": "0deffebf-c21d-4267-b8a1-7e9b041e4de0", 00:08:43.164 "is_configured": true, 00:08:43.164 "data_offset": 0, 00:08:43.164 "data_size": 65536 00:08:43.164 } 00:08:43.164 ] 00:08:43.164 }' 00:08:43.164 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.164 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.424 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:43.424 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 
00:08:43.424 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:43.424 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:43.424 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:43.424 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:43.424 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:43.424 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:43.424 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.424 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.424 [2024-10-30 09:43:21.890580] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.424 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.424 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:43.424 "name": "Existed_Raid", 00:08:43.424 "aliases": [ 00:08:43.424 "b2685d6e-af70-428a-ab17-1e07dc3fee9f" 00:08:43.424 ], 00:08:43.424 "product_name": "Raid Volume", 00:08:43.424 "block_size": 512, 00:08:43.424 "num_blocks": 262144, 00:08:43.424 "uuid": "b2685d6e-af70-428a-ab17-1e07dc3fee9f", 00:08:43.424 "assigned_rate_limits": { 00:08:43.424 "rw_ios_per_sec": 0, 00:08:43.424 "rw_mbytes_per_sec": 0, 00:08:43.424 "r_mbytes_per_sec": 0, 00:08:43.424 "w_mbytes_per_sec": 0 00:08:43.424 }, 00:08:43.424 "claimed": false, 00:08:43.424 "zoned": false, 00:08:43.424 "supported_io_types": { 00:08:43.424 "read": true, 00:08:43.424 "write": true, 00:08:43.424 "unmap": true, 00:08:43.424 "flush": true, 00:08:43.424 "reset": true, 00:08:43.424 "nvme_admin": false, 00:08:43.424 "nvme_io": 
false, 00:08:43.424 "nvme_io_md": false, 00:08:43.424 "write_zeroes": true, 00:08:43.424 "zcopy": false, 00:08:43.424 "get_zone_info": false, 00:08:43.424 "zone_management": false, 00:08:43.424 "zone_append": false, 00:08:43.424 "compare": false, 00:08:43.424 "compare_and_write": false, 00:08:43.424 "abort": false, 00:08:43.424 "seek_hole": false, 00:08:43.424 "seek_data": false, 00:08:43.424 "copy": false, 00:08:43.424 "nvme_iov_md": false 00:08:43.424 }, 00:08:43.424 "memory_domains": [ 00:08:43.424 { 00:08:43.424 "dma_device_id": "system", 00:08:43.424 "dma_device_type": 1 00:08:43.424 }, 00:08:43.424 { 00:08:43.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.424 "dma_device_type": 2 00:08:43.424 }, 00:08:43.424 { 00:08:43.424 "dma_device_id": "system", 00:08:43.424 "dma_device_type": 1 00:08:43.424 }, 00:08:43.424 { 00:08:43.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.424 "dma_device_type": 2 00:08:43.424 }, 00:08:43.424 { 00:08:43.424 "dma_device_id": "system", 00:08:43.424 "dma_device_type": 1 00:08:43.424 }, 00:08:43.424 { 00:08:43.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.424 "dma_device_type": 2 00:08:43.425 }, 00:08:43.425 { 00:08:43.425 "dma_device_id": "system", 00:08:43.425 "dma_device_type": 1 00:08:43.425 }, 00:08:43.425 { 00:08:43.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.425 "dma_device_type": 2 00:08:43.425 } 00:08:43.425 ], 00:08:43.425 "driver_specific": { 00:08:43.425 "raid": { 00:08:43.425 "uuid": "b2685d6e-af70-428a-ab17-1e07dc3fee9f", 00:08:43.425 "strip_size_kb": 64, 00:08:43.425 "state": "online", 00:08:43.425 "raid_level": "raid0", 00:08:43.425 "superblock": false, 00:08:43.425 "num_base_bdevs": 4, 00:08:43.425 "num_base_bdevs_discovered": 4, 00:08:43.425 "num_base_bdevs_operational": 4, 00:08:43.425 "base_bdevs_list": [ 00:08:43.425 { 00:08:43.425 "name": "NewBaseBdev", 00:08:43.425 "uuid": "17daf977-f371-49be-91d4-3029942a7345", 00:08:43.425 "is_configured": true, 00:08:43.425 "data_offset": 
0, 00:08:43.425 "data_size": 65536 00:08:43.425 }, 00:08:43.425 { 00:08:43.425 "name": "BaseBdev2", 00:08:43.425 "uuid": "d8105fe9-ee89-4ed6-8685-68e84fc32d12", 00:08:43.425 "is_configured": true, 00:08:43.425 "data_offset": 0, 00:08:43.425 "data_size": 65536 00:08:43.425 }, 00:08:43.425 { 00:08:43.425 "name": "BaseBdev3", 00:08:43.425 "uuid": "ebdac896-e559-4caf-8de3-38035a82d479", 00:08:43.425 "is_configured": true, 00:08:43.425 "data_offset": 0, 00:08:43.425 "data_size": 65536 00:08:43.425 }, 00:08:43.425 { 00:08:43.425 "name": "BaseBdev4", 00:08:43.425 "uuid": "0deffebf-c21d-4267-b8a1-7e9b041e4de0", 00:08:43.425 "is_configured": true, 00:08:43.425 "data_offset": 0, 00:08:43.425 "data_size": 65536 00:08:43.425 } 00:08:43.425 ] 00:08:43.425 } 00:08:43.425 } 00:08:43.425 }' 00:08:43.425 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:43.425 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:43.425 BaseBdev2 00:08:43.425 BaseBdev3 00:08:43.425 BaseBdev4' 00:08:43.425 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.425 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:43.425 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.425 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.425 09:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:43.425 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.425 09:43:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:43.425 09:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.425 09:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.425 09:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.425 09:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.425 09:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.425 09:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:43.425 09:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.425 09:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.425 09:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.685 [2024-10-30 09:43:22.110263] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:43.685 [2024-10-30 09:43:22.110288] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:43.685 [2024-10-30 09:43:22.110355] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:43.685 [2024-10-30 09:43:22.110419] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:43.685 [2024-10-30 09:43:22.110428] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67768 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 67768 ']' 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 67768 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67768 00:08:43.685 killing process with pid 67768 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67768' 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 67768 00:08:43.685 [2024-10-30 09:43:22.143095] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:43.685 09:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 67768 00:08:43.945 [2024-10-30 09:43:22.388138] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:44.516 09:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:44.517 
************************************ 00:08:44.517 END TEST raid_state_function_test 00:08:44.517 ************************************ 00:08:44.517 00:08:44.517 real 0m8.367s 00:08:44.517 user 0m13.312s 00:08:44.517 sys 0m1.351s 00:08:44.517 09:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:44.517 09:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.779 09:43:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:08:44.779 09:43:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:44.779 09:43:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:44.779 09:43:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:44.779 ************************************ 00:08:44.779 START TEST raid_state_function_test_sb 00:08:44.779 ************************************ 00:08:44.779 09:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 true 00:08:44.779 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:44.779 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:08:44.779 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:44.779 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:44.780 
09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:44.780 Process raid pid: 68406 00:08:44.780 Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68406 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68406' 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68406 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 68406 ']' 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:44.780 09:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.780 [2024-10-30 09:43:23.247228] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:08:44.780 [2024-10-30 09:43:23.247402] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.042 [2024-10-30 09:43:23.482690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.042 [2024-10-30 09:43:23.613616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.304 [2024-10-30 09:43:23.751938] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.304 [2024-10-30 09:43:23.751977] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.565 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:45.565 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:08:45.565 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:45.565 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.565 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.565 [2024-10-30 09:43:24.098813] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:45.565 [2024-10-30 09:43:24.098863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:45.565 [2024-10-30 09:43:24.098873] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:45.565 [2024-10-30 09:43:24.098882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:45.565 [2024-10-30 09:43:24.098889] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:08:45.565 [2024-10-30 09:43:24.098897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:45.565 [2024-10-30 09:43:24.098908] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:45.565 [2024-10-30 09:43:24.098917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:45.565 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.565 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:45.565 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.565 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.565 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.565 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.565 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:45.565 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.565 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.565 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.565 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.565 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.565 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.565 09:43:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.565 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.565 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.565 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.565 "name": "Existed_Raid", 00:08:45.565 "uuid": "dcd67e8b-beb9-4057-8b9d-1bcafb565586", 00:08:45.565 "strip_size_kb": 64, 00:08:45.565 "state": "configuring", 00:08:45.565 "raid_level": "raid0", 00:08:45.565 "superblock": true, 00:08:45.565 "num_base_bdevs": 4, 00:08:45.565 "num_base_bdevs_discovered": 0, 00:08:45.565 "num_base_bdevs_operational": 4, 00:08:45.565 "base_bdevs_list": [ 00:08:45.565 { 00:08:45.565 "name": "BaseBdev1", 00:08:45.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.565 "is_configured": false, 00:08:45.565 "data_offset": 0, 00:08:45.565 "data_size": 0 00:08:45.566 }, 00:08:45.566 { 00:08:45.566 "name": "BaseBdev2", 00:08:45.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.566 "is_configured": false, 00:08:45.566 "data_offset": 0, 00:08:45.566 "data_size": 0 00:08:45.566 }, 00:08:45.566 { 00:08:45.566 "name": "BaseBdev3", 00:08:45.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.566 "is_configured": false, 00:08:45.566 "data_offset": 0, 00:08:45.566 "data_size": 0 00:08:45.566 }, 00:08:45.566 { 00:08:45.566 "name": "BaseBdev4", 00:08:45.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.566 "is_configured": false, 00:08:45.566 "data_offset": 0, 00:08:45.566 "data_size": 0 00:08:45.566 } 00:08:45.566 ] 00:08:45.566 }' 00:08:45.566 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.566 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.828 09:43:24 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:45.828 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.828 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.828 [2024-10-30 09:43:24.410814] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:45.828 [2024-10-30 09:43:24.410848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:45.828 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.828 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:45.828 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.828 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.828 [2024-10-30 09:43:24.418830] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:45.828 [2024-10-30 09:43:24.418867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:45.828 [2024-10-30 09:43:24.418876] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:45.828 [2024-10-30 09:43:24.418886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:45.828 [2024-10-30 09:43:24.418893] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:45.828 [2024-10-30 09:43:24.418902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:45.828 [2024-10-30 09:43:24.418909] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:08:45.828 [2024-10-30 09:43:24.418918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:45.828 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.828 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:45.828 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.828 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.090 [2024-10-30 09:43:24.451402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.090 BaseBdev1 00:08:46.090 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.090 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:46.090 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:46.090 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:46.090 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:46.090 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:46.090 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:46.090 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:46.090 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.090 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.090 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:46.090 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:46.090 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.090 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.090 [ 00:08:46.090 { 00:08:46.090 "name": "BaseBdev1", 00:08:46.090 "aliases": [ 00:08:46.090 "06e22521-f20a-431f-a2bb-735ef4ac316b" 00:08:46.090 ], 00:08:46.090 "product_name": "Malloc disk", 00:08:46.090 "block_size": 512, 00:08:46.090 "num_blocks": 65536, 00:08:46.090 "uuid": "06e22521-f20a-431f-a2bb-735ef4ac316b", 00:08:46.090 "assigned_rate_limits": { 00:08:46.090 "rw_ios_per_sec": 0, 00:08:46.090 "rw_mbytes_per_sec": 0, 00:08:46.090 "r_mbytes_per_sec": 0, 00:08:46.090 "w_mbytes_per_sec": 0 00:08:46.090 }, 00:08:46.090 "claimed": true, 00:08:46.090 "claim_type": "exclusive_write", 00:08:46.090 "zoned": false, 00:08:46.090 "supported_io_types": { 00:08:46.090 "read": true, 00:08:46.090 "write": true, 00:08:46.090 "unmap": true, 00:08:46.090 "flush": true, 00:08:46.090 "reset": true, 00:08:46.090 "nvme_admin": false, 00:08:46.090 "nvme_io": false, 00:08:46.090 "nvme_io_md": false, 00:08:46.090 "write_zeroes": true, 00:08:46.090 "zcopy": true, 00:08:46.090 "get_zone_info": false, 00:08:46.090 "zone_management": false, 00:08:46.090 "zone_append": false, 00:08:46.090 "compare": false, 00:08:46.090 "compare_and_write": false, 00:08:46.090 "abort": true, 00:08:46.090 "seek_hole": false, 00:08:46.090 "seek_data": false, 00:08:46.090 "copy": true, 00:08:46.090 "nvme_iov_md": false 00:08:46.090 }, 00:08:46.090 "memory_domains": [ 00:08:46.090 { 00:08:46.090 "dma_device_id": "system", 00:08:46.090 "dma_device_type": 1 00:08:46.090 }, 00:08:46.090 { 00:08:46.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.090 "dma_device_type": 2 00:08:46.090 } 00:08:46.090 ], 00:08:46.090 "driver_specific": {} 
00:08:46.090 } 00:08:46.090 ] 00:08:46.090 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.090 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:46.090 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:46.090 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.091 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.091 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.091 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.091 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:46.091 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.091 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.091 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.091 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.091 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.091 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.091 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.091 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.091 09:43:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.091 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.091 "name": "Existed_Raid", 00:08:46.091 "uuid": "146b0f15-dbe0-4e53-a2d0-57998017e0c4", 00:08:46.091 "strip_size_kb": 64, 00:08:46.091 "state": "configuring", 00:08:46.091 "raid_level": "raid0", 00:08:46.091 "superblock": true, 00:08:46.091 "num_base_bdevs": 4, 00:08:46.091 "num_base_bdevs_discovered": 1, 00:08:46.091 "num_base_bdevs_operational": 4, 00:08:46.091 "base_bdevs_list": [ 00:08:46.091 { 00:08:46.091 "name": "BaseBdev1", 00:08:46.091 "uuid": "06e22521-f20a-431f-a2bb-735ef4ac316b", 00:08:46.091 "is_configured": true, 00:08:46.091 "data_offset": 2048, 00:08:46.091 "data_size": 63488 00:08:46.091 }, 00:08:46.091 { 00:08:46.091 "name": "BaseBdev2", 00:08:46.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.091 "is_configured": false, 00:08:46.091 "data_offset": 0, 00:08:46.091 "data_size": 0 00:08:46.091 }, 00:08:46.091 { 00:08:46.091 "name": "BaseBdev3", 00:08:46.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.091 "is_configured": false, 00:08:46.091 "data_offset": 0, 00:08:46.091 "data_size": 0 00:08:46.091 }, 00:08:46.091 { 00:08:46.091 "name": "BaseBdev4", 00:08:46.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.091 "is_configured": false, 00:08:46.091 "data_offset": 0, 00:08:46.091 "data_size": 0 00:08:46.091 } 00:08:46.091 ] 00:08:46.091 }' 00:08:46.091 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.091 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.353 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:46.353 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.353 09:43:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:46.353 [2024-10-30 09:43:24.799518] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:46.354 [2024-10-30 09:43:24.799675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:46.354 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.354 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:46.354 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.354 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.354 [2024-10-30 09:43:24.807585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.354 [2024-10-30 09:43:24.809546] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:46.354 [2024-10-30 09:43:24.809666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:46.354 [2024-10-30 09:43:24.809723] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:46.354 [2024-10-30 09:43:24.809754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:46.354 [2024-10-30 09:43:24.809775] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:46.354 [2024-10-30 09:43:24.809798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:46.354 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.354 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:46.354 09:43:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:46.354 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:46.354 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.354 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.354 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.354 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.354 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:46.354 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.354 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.354 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.354 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.354 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.354 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.354 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.354 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.354 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.354 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.354 "name": 
"Existed_Raid", 00:08:46.354 "uuid": "46b14f7d-2ccf-43eb-b13b-107f33f0fd9b", 00:08:46.354 "strip_size_kb": 64, 00:08:46.354 "state": "configuring", 00:08:46.354 "raid_level": "raid0", 00:08:46.354 "superblock": true, 00:08:46.354 "num_base_bdevs": 4, 00:08:46.354 "num_base_bdevs_discovered": 1, 00:08:46.354 "num_base_bdevs_operational": 4, 00:08:46.354 "base_bdevs_list": [ 00:08:46.354 { 00:08:46.354 "name": "BaseBdev1", 00:08:46.354 "uuid": "06e22521-f20a-431f-a2bb-735ef4ac316b", 00:08:46.354 "is_configured": true, 00:08:46.354 "data_offset": 2048, 00:08:46.354 "data_size": 63488 00:08:46.354 }, 00:08:46.354 { 00:08:46.354 "name": "BaseBdev2", 00:08:46.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.354 "is_configured": false, 00:08:46.354 "data_offset": 0, 00:08:46.354 "data_size": 0 00:08:46.354 }, 00:08:46.354 { 00:08:46.354 "name": "BaseBdev3", 00:08:46.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.354 "is_configured": false, 00:08:46.354 "data_offset": 0, 00:08:46.354 "data_size": 0 00:08:46.354 }, 00:08:46.354 { 00:08:46.354 "name": "BaseBdev4", 00:08:46.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.354 "is_configured": false, 00:08:46.354 "data_offset": 0, 00:08:46.354 "data_size": 0 00:08:46.354 } 00:08:46.354 ] 00:08:46.354 }' 00:08:46.354 09:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.354 09:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.615 [2024-10-30 09:43:25.186489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:08:46.615 BaseBdev2 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.615 [ 00:08:46.615 { 00:08:46.615 "name": "BaseBdev2", 00:08:46.615 "aliases": [ 00:08:46.615 "2a71f801-b696-410a-afa2-177a7e839205" 00:08:46.615 ], 00:08:46.615 "product_name": "Malloc disk", 00:08:46.615 "block_size": 512, 00:08:46.615 "num_blocks": 65536, 00:08:46.615 "uuid": "2a71f801-b696-410a-afa2-177a7e839205", 00:08:46.615 
"assigned_rate_limits": { 00:08:46.615 "rw_ios_per_sec": 0, 00:08:46.615 "rw_mbytes_per_sec": 0, 00:08:46.615 "r_mbytes_per_sec": 0, 00:08:46.615 "w_mbytes_per_sec": 0 00:08:46.615 }, 00:08:46.615 "claimed": true, 00:08:46.615 "claim_type": "exclusive_write", 00:08:46.615 "zoned": false, 00:08:46.615 "supported_io_types": { 00:08:46.615 "read": true, 00:08:46.615 "write": true, 00:08:46.615 "unmap": true, 00:08:46.615 "flush": true, 00:08:46.615 "reset": true, 00:08:46.615 "nvme_admin": false, 00:08:46.615 "nvme_io": false, 00:08:46.615 "nvme_io_md": false, 00:08:46.615 "write_zeroes": true, 00:08:46.615 "zcopy": true, 00:08:46.615 "get_zone_info": false, 00:08:46.615 "zone_management": false, 00:08:46.615 "zone_append": false, 00:08:46.615 "compare": false, 00:08:46.615 "compare_and_write": false, 00:08:46.615 "abort": true, 00:08:46.615 "seek_hole": false, 00:08:46.615 "seek_data": false, 00:08:46.615 "copy": true, 00:08:46.615 "nvme_iov_md": false 00:08:46.615 }, 00:08:46.615 "memory_domains": [ 00:08:46.615 { 00:08:46.615 "dma_device_id": "system", 00:08:46.615 "dma_device_type": 1 00:08:46.615 }, 00:08:46.615 { 00:08:46.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.615 "dma_device_type": 2 00:08:46.615 } 00:08:46.615 ], 00:08:46.615 "driver_specific": {} 00:08:46.615 } 00:08:46.615 ] 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.615 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.878 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.878 "name": "Existed_Raid", 00:08:46.878 "uuid": "46b14f7d-2ccf-43eb-b13b-107f33f0fd9b", 00:08:46.878 "strip_size_kb": 64, 00:08:46.878 "state": "configuring", 00:08:46.878 "raid_level": "raid0", 00:08:46.878 "superblock": true, 00:08:46.878 "num_base_bdevs": 4, 00:08:46.878 "num_base_bdevs_discovered": 2, 00:08:46.878 "num_base_bdevs_operational": 4, 
00:08:46.878 "base_bdevs_list": [ 00:08:46.878 { 00:08:46.878 "name": "BaseBdev1", 00:08:46.878 "uuid": "06e22521-f20a-431f-a2bb-735ef4ac316b", 00:08:46.878 "is_configured": true, 00:08:46.878 "data_offset": 2048, 00:08:46.878 "data_size": 63488 00:08:46.878 }, 00:08:46.878 { 00:08:46.878 "name": "BaseBdev2", 00:08:46.878 "uuid": "2a71f801-b696-410a-afa2-177a7e839205", 00:08:46.878 "is_configured": true, 00:08:46.878 "data_offset": 2048, 00:08:46.878 "data_size": 63488 00:08:46.878 }, 00:08:46.878 { 00:08:46.878 "name": "BaseBdev3", 00:08:46.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.878 "is_configured": false, 00:08:46.878 "data_offset": 0, 00:08:46.878 "data_size": 0 00:08:46.878 }, 00:08:46.878 { 00:08:46.878 "name": "BaseBdev4", 00:08:46.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.878 "is_configured": false, 00:08:46.878 "data_offset": 0, 00:08:46.878 "data_size": 0 00:08:46.878 } 00:08:46.878 ] 00:08:46.878 }' 00:08:46.878 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.878 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.139 [2024-10-30 09:43:25.552222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:47.139 BaseBdev3 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # 
local bdev_name=BaseBdev3 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.139 [ 00:08:47.139 { 00:08:47.139 "name": "BaseBdev3", 00:08:47.139 "aliases": [ 00:08:47.139 "6adc1047-df1b-4d5d-8349-c1a8a3587a28" 00:08:47.139 ], 00:08:47.139 "product_name": "Malloc disk", 00:08:47.139 "block_size": 512, 00:08:47.139 "num_blocks": 65536, 00:08:47.139 "uuid": "6adc1047-df1b-4d5d-8349-c1a8a3587a28", 00:08:47.139 "assigned_rate_limits": { 00:08:47.139 "rw_ios_per_sec": 0, 00:08:47.139 "rw_mbytes_per_sec": 0, 00:08:47.139 "r_mbytes_per_sec": 0, 00:08:47.139 "w_mbytes_per_sec": 0 00:08:47.139 }, 00:08:47.139 "claimed": true, 00:08:47.139 "claim_type": "exclusive_write", 00:08:47.139 "zoned": false, 00:08:47.139 "supported_io_types": { 00:08:47.139 "read": true, 00:08:47.139 
"write": true, 00:08:47.139 "unmap": true, 00:08:47.139 "flush": true, 00:08:47.139 "reset": true, 00:08:47.139 "nvme_admin": false, 00:08:47.139 "nvme_io": false, 00:08:47.139 "nvme_io_md": false, 00:08:47.139 "write_zeroes": true, 00:08:47.139 "zcopy": true, 00:08:47.139 "get_zone_info": false, 00:08:47.139 "zone_management": false, 00:08:47.139 "zone_append": false, 00:08:47.139 "compare": false, 00:08:47.139 "compare_and_write": false, 00:08:47.139 "abort": true, 00:08:47.139 "seek_hole": false, 00:08:47.139 "seek_data": false, 00:08:47.139 "copy": true, 00:08:47.139 "nvme_iov_md": false 00:08:47.139 }, 00:08:47.139 "memory_domains": [ 00:08:47.139 { 00:08:47.139 "dma_device_id": "system", 00:08:47.139 "dma_device_type": 1 00:08:47.139 }, 00:08:47.139 { 00:08:47.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.139 "dma_device_type": 2 00:08:47.139 } 00:08:47.139 ], 00:08:47.139 "driver_specific": {} 00:08:47.139 } 00:08:47.139 ] 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.139 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.139 "name": "Existed_Raid", 00:08:47.139 "uuid": "46b14f7d-2ccf-43eb-b13b-107f33f0fd9b", 00:08:47.139 "strip_size_kb": 64, 00:08:47.139 "state": "configuring", 00:08:47.139 "raid_level": "raid0", 00:08:47.139 "superblock": true, 00:08:47.139 "num_base_bdevs": 4, 00:08:47.139 "num_base_bdevs_discovered": 3, 00:08:47.139 "num_base_bdevs_operational": 4, 00:08:47.139 "base_bdevs_list": [ 00:08:47.139 { 00:08:47.139 "name": "BaseBdev1", 00:08:47.139 "uuid": "06e22521-f20a-431f-a2bb-735ef4ac316b", 00:08:47.139 "is_configured": true, 00:08:47.139 "data_offset": 2048, 00:08:47.139 "data_size": 63488 00:08:47.139 }, 00:08:47.139 { 00:08:47.139 "name": "BaseBdev2", 00:08:47.139 "uuid": 
"2a71f801-b696-410a-afa2-177a7e839205", 00:08:47.139 "is_configured": true, 00:08:47.139 "data_offset": 2048, 00:08:47.139 "data_size": 63488 00:08:47.139 }, 00:08:47.139 { 00:08:47.139 "name": "BaseBdev3", 00:08:47.139 "uuid": "6adc1047-df1b-4d5d-8349-c1a8a3587a28", 00:08:47.139 "is_configured": true, 00:08:47.140 "data_offset": 2048, 00:08:47.140 "data_size": 63488 00:08:47.140 }, 00:08:47.140 { 00:08:47.140 "name": "BaseBdev4", 00:08:47.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.140 "is_configured": false, 00:08:47.140 "data_offset": 0, 00:08:47.140 "data_size": 0 00:08:47.140 } 00:08:47.140 ] 00:08:47.140 }' 00:08:47.140 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.140 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.400 [2024-10-30 09:43:25.915081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:47.400 [2024-10-30 09:43:25.915451] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:47.400 [2024-10-30 09:43:25.915492] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:47.400 [2024-10-30 09:43:25.915826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:47.400 [2024-10-30 09:43:25.916042] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:47.400 [2024-10-30 09:43:25.916147] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:47.400 BaseBdev4 00:08:47.400 
[2024-10-30 09:43:25.916349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.400 [ 00:08:47.400 { 00:08:47.400 "name": "BaseBdev4", 00:08:47.400 "aliases": [ 00:08:47.400 "3490eb23-efcd-40c2-8c07-246cec32d145" 00:08:47.400 ], 00:08:47.400 "product_name": "Malloc disk", 00:08:47.400 "block_size": 512, 00:08:47.400 
"num_blocks": 65536, 00:08:47.400 "uuid": "3490eb23-efcd-40c2-8c07-246cec32d145", 00:08:47.400 "assigned_rate_limits": { 00:08:47.400 "rw_ios_per_sec": 0, 00:08:47.400 "rw_mbytes_per_sec": 0, 00:08:47.400 "r_mbytes_per_sec": 0, 00:08:47.400 "w_mbytes_per_sec": 0 00:08:47.400 }, 00:08:47.400 "claimed": true, 00:08:47.400 "claim_type": "exclusive_write", 00:08:47.400 "zoned": false, 00:08:47.400 "supported_io_types": { 00:08:47.400 "read": true, 00:08:47.400 "write": true, 00:08:47.400 "unmap": true, 00:08:47.400 "flush": true, 00:08:47.400 "reset": true, 00:08:47.400 "nvme_admin": false, 00:08:47.400 "nvme_io": false, 00:08:47.400 "nvme_io_md": false, 00:08:47.400 "write_zeroes": true, 00:08:47.400 "zcopy": true, 00:08:47.400 "get_zone_info": false, 00:08:47.400 "zone_management": false, 00:08:47.400 "zone_append": false, 00:08:47.400 "compare": false, 00:08:47.400 "compare_and_write": false, 00:08:47.400 "abort": true, 00:08:47.400 "seek_hole": false, 00:08:47.400 "seek_data": false, 00:08:47.400 "copy": true, 00:08:47.400 "nvme_iov_md": false 00:08:47.400 }, 00:08:47.400 "memory_domains": [ 00:08:47.400 { 00:08:47.400 "dma_device_id": "system", 00:08:47.400 "dma_device_type": 1 00:08:47.400 }, 00:08:47.400 { 00:08:47.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.400 "dma_device_type": 2 00:08:47.400 } 00:08:47.400 ], 00:08:47.400 "driver_specific": {} 00:08:47.400 } 00:08:47.400 ] 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.400 "name": "Existed_Raid", 00:08:47.400 "uuid": "46b14f7d-2ccf-43eb-b13b-107f33f0fd9b", 00:08:47.400 "strip_size_kb": 64, 00:08:47.400 "state": "online", 00:08:47.400 "raid_level": "raid0", 00:08:47.400 "superblock": true, 00:08:47.400 "num_base_bdevs": 4, 
00:08:47.400 "num_base_bdevs_discovered": 4, 00:08:47.400 "num_base_bdevs_operational": 4, 00:08:47.400 "base_bdevs_list": [ 00:08:47.400 { 00:08:47.400 "name": "BaseBdev1", 00:08:47.400 "uuid": "06e22521-f20a-431f-a2bb-735ef4ac316b", 00:08:47.400 "is_configured": true, 00:08:47.400 "data_offset": 2048, 00:08:47.400 "data_size": 63488 00:08:47.400 }, 00:08:47.400 { 00:08:47.400 "name": "BaseBdev2", 00:08:47.400 "uuid": "2a71f801-b696-410a-afa2-177a7e839205", 00:08:47.400 "is_configured": true, 00:08:47.400 "data_offset": 2048, 00:08:47.400 "data_size": 63488 00:08:47.400 }, 00:08:47.400 { 00:08:47.400 "name": "BaseBdev3", 00:08:47.400 "uuid": "6adc1047-df1b-4d5d-8349-c1a8a3587a28", 00:08:47.400 "is_configured": true, 00:08:47.400 "data_offset": 2048, 00:08:47.400 "data_size": 63488 00:08:47.400 }, 00:08:47.400 { 00:08:47.400 "name": "BaseBdev4", 00:08:47.400 "uuid": "3490eb23-efcd-40c2-8c07-246cec32d145", 00:08:47.400 "is_configured": true, 00:08:47.400 "data_offset": 2048, 00:08:47.400 "data_size": 63488 00:08:47.400 } 00:08:47.400 ] 00:08:47.400 }' 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.400 09:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.661 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:47.661 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:47.661 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:47.661 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:47.661 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:47.661 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:47.661 
09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:47.661 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.661 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.661 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:47.661 [2024-10-30 09:43:26.275578] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:47.922 "name": "Existed_Raid", 00:08:47.922 "aliases": [ 00:08:47.922 "46b14f7d-2ccf-43eb-b13b-107f33f0fd9b" 00:08:47.922 ], 00:08:47.922 "product_name": "Raid Volume", 00:08:47.922 "block_size": 512, 00:08:47.922 "num_blocks": 253952, 00:08:47.922 "uuid": "46b14f7d-2ccf-43eb-b13b-107f33f0fd9b", 00:08:47.922 "assigned_rate_limits": { 00:08:47.922 "rw_ios_per_sec": 0, 00:08:47.922 "rw_mbytes_per_sec": 0, 00:08:47.922 "r_mbytes_per_sec": 0, 00:08:47.922 "w_mbytes_per_sec": 0 00:08:47.922 }, 00:08:47.922 "claimed": false, 00:08:47.922 "zoned": false, 00:08:47.922 "supported_io_types": { 00:08:47.922 "read": true, 00:08:47.922 "write": true, 00:08:47.922 "unmap": true, 00:08:47.922 "flush": true, 00:08:47.922 "reset": true, 00:08:47.922 "nvme_admin": false, 00:08:47.922 "nvme_io": false, 00:08:47.922 "nvme_io_md": false, 00:08:47.922 "write_zeroes": true, 00:08:47.922 "zcopy": false, 00:08:47.922 "get_zone_info": false, 00:08:47.922 "zone_management": false, 00:08:47.922 "zone_append": false, 00:08:47.922 "compare": false, 00:08:47.922 "compare_and_write": false, 00:08:47.922 "abort": false, 00:08:47.922 "seek_hole": false, 00:08:47.922 "seek_data": false, 00:08:47.922 "copy": false, 00:08:47.922 
"nvme_iov_md": false 00:08:47.922 }, 00:08:47.922 "memory_domains": [ 00:08:47.922 { 00:08:47.922 "dma_device_id": "system", 00:08:47.922 "dma_device_type": 1 00:08:47.922 }, 00:08:47.922 { 00:08:47.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.922 "dma_device_type": 2 00:08:47.922 }, 00:08:47.922 { 00:08:47.922 "dma_device_id": "system", 00:08:47.922 "dma_device_type": 1 00:08:47.922 }, 00:08:47.922 { 00:08:47.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.922 "dma_device_type": 2 00:08:47.922 }, 00:08:47.922 { 00:08:47.922 "dma_device_id": "system", 00:08:47.922 "dma_device_type": 1 00:08:47.922 }, 00:08:47.922 { 00:08:47.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.922 "dma_device_type": 2 00:08:47.922 }, 00:08:47.922 { 00:08:47.922 "dma_device_id": "system", 00:08:47.922 "dma_device_type": 1 00:08:47.922 }, 00:08:47.922 { 00:08:47.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.922 "dma_device_type": 2 00:08:47.922 } 00:08:47.922 ], 00:08:47.922 "driver_specific": { 00:08:47.922 "raid": { 00:08:47.922 "uuid": "46b14f7d-2ccf-43eb-b13b-107f33f0fd9b", 00:08:47.922 "strip_size_kb": 64, 00:08:47.922 "state": "online", 00:08:47.922 "raid_level": "raid0", 00:08:47.922 "superblock": true, 00:08:47.922 "num_base_bdevs": 4, 00:08:47.922 "num_base_bdevs_discovered": 4, 00:08:47.922 "num_base_bdevs_operational": 4, 00:08:47.922 "base_bdevs_list": [ 00:08:47.922 { 00:08:47.922 "name": "BaseBdev1", 00:08:47.922 "uuid": "06e22521-f20a-431f-a2bb-735ef4ac316b", 00:08:47.922 "is_configured": true, 00:08:47.922 "data_offset": 2048, 00:08:47.922 "data_size": 63488 00:08:47.922 }, 00:08:47.922 { 00:08:47.922 "name": "BaseBdev2", 00:08:47.922 "uuid": "2a71f801-b696-410a-afa2-177a7e839205", 00:08:47.922 "is_configured": true, 00:08:47.922 "data_offset": 2048, 00:08:47.922 "data_size": 63488 00:08:47.922 }, 00:08:47.922 { 00:08:47.922 "name": "BaseBdev3", 00:08:47.922 "uuid": "6adc1047-df1b-4d5d-8349-c1a8a3587a28", 00:08:47.922 "is_configured": true, 
00:08:47.922 "data_offset": 2048, 00:08:47.922 "data_size": 63488 00:08:47.922 }, 00:08:47.922 { 00:08:47.922 "name": "BaseBdev4", 00:08:47.922 "uuid": "3490eb23-efcd-40c2-8c07-246cec32d145", 00:08:47.922 "is_configured": true, 00:08:47.922 "data_offset": 2048, 00:08:47.922 "data_size": 63488 00:08:47.922 } 00:08:47.922 ] 00:08:47.922 } 00:08:47.922 } 00:08:47.922 }' 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:47.922 BaseBdev2 00:08:47.922 BaseBdev3 00:08:47.922 BaseBdev4' 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.922 09:43:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.922 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.922 [2024-10-30 09:43:26.495334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:47.922 [2024-10-30 09:43:26.495361] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:47.922 [2024-10-30 09:43:26.495409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:48.183 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.183 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:48.183 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:48.183 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:08:48.183 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:48.183 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:48.183 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:08:48.183 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.183 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:48.183 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.183 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.183 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.183 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.183 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.183 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.183 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.183 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.183 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.183 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.183 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.183 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:48.183 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.183 "name": "Existed_Raid", 00:08:48.183 "uuid": "46b14f7d-2ccf-43eb-b13b-107f33f0fd9b", 00:08:48.183 "strip_size_kb": 64, 00:08:48.183 "state": "offline", 00:08:48.183 "raid_level": "raid0", 00:08:48.183 "superblock": true, 00:08:48.183 "num_base_bdevs": 4, 00:08:48.183 "num_base_bdevs_discovered": 3, 00:08:48.183 "num_base_bdevs_operational": 3, 00:08:48.183 "base_bdevs_list": [ 00:08:48.183 { 00:08:48.183 "name": null, 00:08:48.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.183 "is_configured": false, 00:08:48.183 "data_offset": 0, 00:08:48.183 "data_size": 63488 00:08:48.183 }, 00:08:48.183 { 00:08:48.183 "name": "BaseBdev2", 00:08:48.183 "uuid": "2a71f801-b696-410a-afa2-177a7e839205", 00:08:48.183 "is_configured": true, 00:08:48.183 "data_offset": 2048, 00:08:48.183 "data_size": 63488 00:08:48.183 }, 00:08:48.183 { 00:08:48.183 "name": "BaseBdev3", 00:08:48.183 "uuid": "6adc1047-df1b-4d5d-8349-c1a8a3587a28", 00:08:48.183 "is_configured": true, 00:08:48.183 "data_offset": 2048, 00:08:48.183 "data_size": 63488 00:08:48.183 }, 00:08:48.183 { 00:08:48.183 "name": "BaseBdev4", 00:08:48.183 "uuid": "3490eb23-efcd-40c2-8c07-246cec32d145", 00:08:48.183 "is_configured": true, 00:08:48.183 "data_offset": 2048, 00:08:48.183 "data_size": 63488 00:08:48.183 } 00:08:48.183 ] 00:08:48.183 }' 00:08:48.183 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.183 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.443 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:48.443 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:48.443 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:48.443 09:43:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.443 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.443 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.443 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.443 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:48.443 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:48.443 09:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:48.443 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.443 09:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.443 [2024-10-30 09:43:26.954527] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:48.443 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.443 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:48.443 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:48.443 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.443 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.443 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.443 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:48.443 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:48.443 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:48.443 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:48.443 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:48.443 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.443 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.443 [2024-10-30 09:43:27.056345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:08:48.704 09:43:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.704 [2024-10-30 09:43:27.154659] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:08:48.704 [2024-10-30 09:43:27.154704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.704 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.704 BaseBdev2 00:08:48.705 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.705 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:48.705 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:48.705 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:48.705 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:48.705 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:48.705 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:48.705 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:48.705 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.705 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.705 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.705 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:48.705 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.705 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.705 [ 00:08:48.705 { 00:08:48.705 "name": "BaseBdev2", 00:08:48.705 "aliases": [ 00:08:48.705 
"d0d90fae-011b-4b67-9db6-efab775adaf5" 00:08:48.705 ], 00:08:48.705 "product_name": "Malloc disk", 00:08:48.705 "block_size": 512, 00:08:48.705 "num_blocks": 65536, 00:08:48.705 "uuid": "d0d90fae-011b-4b67-9db6-efab775adaf5", 00:08:48.705 "assigned_rate_limits": { 00:08:48.705 "rw_ios_per_sec": 0, 00:08:48.705 "rw_mbytes_per_sec": 0, 00:08:48.705 "r_mbytes_per_sec": 0, 00:08:48.705 "w_mbytes_per_sec": 0 00:08:48.705 }, 00:08:48.705 "claimed": false, 00:08:48.705 "zoned": false, 00:08:48.705 "supported_io_types": { 00:08:48.705 "read": true, 00:08:48.705 "write": true, 00:08:48.705 "unmap": true, 00:08:48.705 "flush": true, 00:08:48.705 "reset": true, 00:08:48.705 "nvme_admin": false, 00:08:48.705 "nvme_io": false, 00:08:48.705 "nvme_io_md": false, 00:08:48.705 "write_zeroes": true, 00:08:48.705 "zcopy": true, 00:08:48.705 "get_zone_info": false, 00:08:48.705 "zone_management": false, 00:08:48.705 "zone_append": false, 00:08:48.705 "compare": false, 00:08:48.705 "compare_and_write": false, 00:08:48.705 "abort": true, 00:08:48.705 "seek_hole": false, 00:08:48.705 "seek_data": false, 00:08:48.705 "copy": true, 00:08:48.705 "nvme_iov_md": false 00:08:48.705 }, 00:08:48.705 "memory_domains": [ 00:08:48.705 { 00:08:48.705 "dma_device_id": "system", 00:08:48.705 "dma_device_type": 1 00:08:48.705 }, 00:08:48.705 { 00:08:48.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.705 "dma_device_type": 2 00:08:48.705 } 00:08:48.705 ], 00:08:48.705 "driver_specific": {} 00:08:48.705 } 00:08:48.705 ] 00:08:48.705 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.705 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:48.705 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:48.705 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:48.705 09:43:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:48.705 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.705 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.965 BaseBdev3 00:08:48.965 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.965 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:48.965 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:48.965 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:48.965 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:48.965 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:48.965 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:48.965 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:48.965 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.965 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.965 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.965 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:48.965 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.965 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.965 [ 00:08:48.965 { 
00:08:48.965 "name": "BaseBdev3", 00:08:48.965 "aliases": [ 00:08:48.965 "211aa9dc-d842-40e2-9ab9-eda66da3143d" 00:08:48.965 ], 00:08:48.965 "product_name": "Malloc disk", 00:08:48.965 "block_size": 512, 00:08:48.965 "num_blocks": 65536, 00:08:48.965 "uuid": "211aa9dc-d842-40e2-9ab9-eda66da3143d", 00:08:48.965 "assigned_rate_limits": { 00:08:48.965 "rw_ios_per_sec": 0, 00:08:48.965 "rw_mbytes_per_sec": 0, 00:08:48.965 "r_mbytes_per_sec": 0, 00:08:48.965 "w_mbytes_per_sec": 0 00:08:48.965 }, 00:08:48.965 "claimed": false, 00:08:48.965 "zoned": false, 00:08:48.965 "supported_io_types": { 00:08:48.965 "read": true, 00:08:48.965 "write": true, 00:08:48.965 "unmap": true, 00:08:48.965 "flush": true, 00:08:48.965 "reset": true, 00:08:48.965 "nvme_admin": false, 00:08:48.965 "nvme_io": false, 00:08:48.965 "nvme_io_md": false, 00:08:48.965 "write_zeroes": true, 00:08:48.965 "zcopy": true, 00:08:48.965 "get_zone_info": false, 00:08:48.965 "zone_management": false, 00:08:48.965 "zone_append": false, 00:08:48.965 "compare": false, 00:08:48.965 "compare_and_write": false, 00:08:48.965 "abort": true, 00:08:48.965 "seek_hole": false, 00:08:48.965 "seek_data": false, 00:08:48.965 "copy": true, 00:08:48.965 "nvme_iov_md": false 00:08:48.965 }, 00:08:48.965 "memory_domains": [ 00:08:48.965 { 00:08:48.966 "dma_device_id": "system", 00:08:48.966 "dma_device_type": 1 00:08:48.966 }, 00:08:48.966 { 00:08:48.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.966 "dma_device_type": 2 00:08:48.966 } 00:08:48.966 ], 00:08:48.966 "driver_specific": {} 00:08:48.966 } 00:08:48.966 ] 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.966 BaseBdev4 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:08:48.966 [ 00:08:48.966 { 00:08:48.966 "name": "BaseBdev4", 00:08:48.966 "aliases": [ 00:08:48.966 "b4bf1a7d-2fab-4e20-b4d5-6e4a7607d45e" 00:08:48.966 ], 00:08:48.966 "product_name": "Malloc disk", 00:08:48.966 "block_size": 512, 00:08:48.966 "num_blocks": 65536, 00:08:48.966 "uuid": "b4bf1a7d-2fab-4e20-b4d5-6e4a7607d45e", 00:08:48.966 "assigned_rate_limits": { 00:08:48.966 "rw_ios_per_sec": 0, 00:08:48.966 "rw_mbytes_per_sec": 0, 00:08:48.966 "r_mbytes_per_sec": 0, 00:08:48.966 "w_mbytes_per_sec": 0 00:08:48.966 }, 00:08:48.966 "claimed": false, 00:08:48.966 "zoned": false, 00:08:48.966 "supported_io_types": { 00:08:48.966 "read": true, 00:08:48.966 "write": true, 00:08:48.966 "unmap": true, 00:08:48.966 "flush": true, 00:08:48.966 "reset": true, 00:08:48.966 "nvme_admin": false, 00:08:48.966 "nvme_io": false, 00:08:48.966 "nvme_io_md": false, 00:08:48.966 "write_zeroes": true, 00:08:48.966 "zcopy": true, 00:08:48.966 "get_zone_info": false, 00:08:48.966 "zone_management": false, 00:08:48.966 "zone_append": false, 00:08:48.966 "compare": false, 00:08:48.966 "compare_and_write": false, 00:08:48.966 "abort": true, 00:08:48.966 "seek_hole": false, 00:08:48.966 "seek_data": false, 00:08:48.966 "copy": true, 00:08:48.966 "nvme_iov_md": false 00:08:48.966 }, 00:08:48.966 "memory_domains": [ 00:08:48.966 { 00:08:48.966 "dma_device_id": "system", 00:08:48.966 "dma_device_type": 1 00:08:48.966 }, 00:08:48.966 { 00:08:48.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.966 "dma_device_type": 2 00:08:48.966 } 00:08:48.966 ], 00:08:48.966 "driver_specific": {} 00:08:48.966 } 00:08:48.966 ] 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:48.966 09:43:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.966 [2024-10-30 09:43:27.423504] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:48.966 [2024-10-30 09:43:27.423648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:48.966 [2024-10-30 09:43:27.423680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.966 [2024-10-30 09:43:27.425564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:48.966 [2024-10-30 09:43:27.425613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.966 "name": "Existed_Raid", 00:08:48.966 "uuid": "33a1f414-b8f5-4a82-b42b-41ef4778b443", 00:08:48.966 "strip_size_kb": 64, 00:08:48.966 "state": "configuring", 00:08:48.966 "raid_level": "raid0", 00:08:48.966 "superblock": true, 00:08:48.966 "num_base_bdevs": 4, 00:08:48.966 "num_base_bdevs_discovered": 3, 00:08:48.966 "num_base_bdevs_operational": 4, 00:08:48.966 "base_bdevs_list": [ 00:08:48.966 { 00:08:48.966 "name": "BaseBdev1", 00:08:48.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.966 "is_configured": false, 00:08:48.966 "data_offset": 0, 00:08:48.966 "data_size": 0 00:08:48.966 }, 00:08:48.966 { 00:08:48.966 "name": "BaseBdev2", 00:08:48.966 "uuid": "d0d90fae-011b-4b67-9db6-efab775adaf5", 00:08:48.966 "is_configured": true, 00:08:48.966 "data_offset": 2048, 00:08:48.966 "data_size": 63488 
00:08:48.966 }, 00:08:48.966 { 00:08:48.966 "name": "BaseBdev3", 00:08:48.966 "uuid": "211aa9dc-d842-40e2-9ab9-eda66da3143d", 00:08:48.966 "is_configured": true, 00:08:48.966 "data_offset": 2048, 00:08:48.966 "data_size": 63488 00:08:48.966 }, 00:08:48.966 { 00:08:48.966 "name": "BaseBdev4", 00:08:48.966 "uuid": "b4bf1a7d-2fab-4e20-b4d5-6e4a7607d45e", 00:08:48.966 "is_configured": true, 00:08:48.966 "data_offset": 2048, 00:08:48.966 "data_size": 63488 00:08:48.966 } 00:08:48.966 ] 00:08:48.966 }' 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.966 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.228 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:49.228 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.228 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.228 [2024-10-30 09:43:27.755568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:49.228 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.228 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:49.228 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.228 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.228 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.228 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.228 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:08:49.228 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.228 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.228 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.228 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.228 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.228 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.228 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.228 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.228 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.228 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.228 "name": "Existed_Raid", 00:08:49.228 "uuid": "33a1f414-b8f5-4a82-b42b-41ef4778b443", 00:08:49.228 "strip_size_kb": 64, 00:08:49.228 "state": "configuring", 00:08:49.228 "raid_level": "raid0", 00:08:49.228 "superblock": true, 00:08:49.228 "num_base_bdevs": 4, 00:08:49.228 "num_base_bdevs_discovered": 2, 00:08:49.228 "num_base_bdevs_operational": 4, 00:08:49.228 "base_bdevs_list": [ 00:08:49.228 { 00:08:49.228 "name": "BaseBdev1", 00:08:49.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.228 "is_configured": false, 00:08:49.228 "data_offset": 0, 00:08:49.228 "data_size": 0 00:08:49.228 }, 00:08:49.228 { 00:08:49.228 "name": null, 00:08:49.228 "uuid": "d0d90fae-011b-4b67-9db6-efab775adaf5", 00:08:49.228 "is_configured": false, 00:08:49.228 "data_offset": 0, 00:08:49.228 "data_size": 63488 
00:08:49.228 }, 00:08:49.228 { 00:08:49.228 "name": "BaseBdev3", 00:08:49.228 "uuid": "211aa9dc-d842-40e2-9ab9-eda66da3143d", 00:08:49.228 "is_configured": true, 00:08:49.228 "data_offset": 2048, 00:08:49.228 "data_size": 63488 00:08:49.228 }, 00:08:49.228 { 00:08:49.228 "name": "BaseBdev4", 00:08:49.228 "uuid": "b4bf1a7d-2fab-4e20-b4d5-6e4a7607d45e", 00:08:49.228 "is_configured": true, 00:08:49.228 "data_offset": 2048, 00:08:49.228 "data_size": 63488 00:08:49.228 } 00:08:49.228 ] 00:08:49.228 }' 00:08:49.228 09:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.228 09:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.489 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.489 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.489 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:49.489 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.489 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.489 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:49.489 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:49.489 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.489 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.750 [2024-10-30 09:43:28.122187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.750 BaseBdev1 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.750 [ 00:08:49.750 { 00:08:49.750 "name": "BaseBdev1", 00:08:49.750 "aliases": [ 00:08:49.750 "2b5095f9-3815-4307-9147-30bebd3b1ec2" 00:08:49.750 ], 00:08:49.750 "product_name": "Malloc disk", 00:08:49.750 "block_size": 512, 00:08:49.750 "num_blocks": 65536, 00:08:49.750 "uuid": "2b5095f9-3815-4307-9147-30bebd3b1ec2", 00:08:49.750 "assigned_rate_limits": { 00:08:49.750 "rw_ios_per_sec": 0, 00:08:49.750 "rw_mbytes_per_sec": 0, 
00:08:49.750 "r_mbytes_per_sec": 0, 00:08:49.750 "w_mbytes_per_sec": 0 00:08:49.750 }, 00:08:49.750 "claimed": true, 00:08:49.750 "claim_type": "exclusive_write", 00:08:49.750 "zoned": false, 00:08:49.750 "supported_io_types": { 00:08:49.750 "read": true, 00:08:49.750 "write": true, 00:08:49.750 "unmap": true, 00:08:49.750 "flush": true, 00:08:49.750 "reset": true, 00:08:49.750 "nvme_admin": false, 00:08:49.750 "nvme_io": false, 00:08:49.750 "nvme_io_md": false, 00:08:49.750 "write_zeroes": true, 00:08:49.750 "zcopy": true, 00:08:49.750 "get_zone_info": false, 00:08:49.750 "zone_management": false, 00:08:49.750 "zone_append": false, 00:08:49.750 "compare": false, 00:08:49.750 "compare_and_write": false, 00:08:49.750 "abort": true, 00:08:49.750 "seek_hole": false, 00:08:49.750 "seek_data": false, 00:08:49.750 "copy": true, 00:08:49.750 "nvme_iov_md": false 00:08:49.750 }, 00:08:49.750 "memory_domains": [ 00:08:49.750 { 00:08:49.750 "dma_device_id": "system", 00:08:49.750 "dma_device_type": 1 00:08:49.750 }, 00:08:49.750 { 00:08:49.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.750 "dma_device_type": 2 00:08:49.750 } 00:08:49.750 ], 00:08:49.750 "driver_specific": {} 00:08:49.750 } 00:08:49.750 ] 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.750 09:43:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.750 "name": "Existed_Raid", 00:08:49.750 "uuid": "33a1f414-b8f5-4a82-b42b-41ef4778b443", 00:08:49.750 "strip_size_kb": 64, 00:08:49.750 "state": "configuring", 00:08:49.750 "raid_level": "raid0", 00:08:49.750 "superblock": true, 00:08:49.750 "num_base_bdevs": 4, 00:08:49.750 "num_base_bdevs_discovered": 3, 00:08:49.750 "num_base_bdevs_operational": 4, 00:08:49.750 "base_bdevs_list": [ 00:08:49.750 { 00:08:49.750 "name": "BaseBdev1", 00:08:49.750 "uuid": "2b5095f9-3815-4307-9147-30bebd3b1ec2", 00:08:49.750 "is_configured": true, 00:08:49.750 "data_offset": 2048, 00:08:49.750 "data_size": 63488 00:08:49.750 }, 00:08:49.750 { 
00:08:49.750 "name": null, 00:08:49.750 "uuid": "d0d90fae-011b-4b67-9db6-efab775adaf5", 00:08:49.750 "is_configured": false, 00:08:49.750 "data_offset": 0, 00:08:49.750 "data_size": 63488 00:08:49.750 }, 00:08:49.750 { 00:08:49.750 "name": "BaseBdev3", 00:08:49.750 "uuid": "211aa9dc-d842-40e2-9ab9-eda66da3143d", 00:08:49.750 "is_configured": true, 00:08:49.750 "data_offset": 2048, 00:08:49.750 "data_size": 63488 00:08:49.750 }, 00:08:49.750 { 00:08:49.750 "name": "BaseBdev4", 00:08:49.750 "uuid": "b4bf1a7d-2fab-4e20-b4d5-6e4a7607d45e", 00:08:49.750 "is_configured": true, 00:08:49.750 "data_offset": 2048, 00:08:49.750 "data_size": 63488 00:08:49.750 } 00:08:49.750 ] 00:08:49.750 }' 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.750 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.011 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.011 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.011 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:50.011 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.011 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.011 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:50.012 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:50.012 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.012 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.012 [2024-10-30 09:43:28.502350] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:50.012 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.012 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:50.012 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.012 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.012 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.012 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.012 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:50.012 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.012 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.012 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.012 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.012 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.012 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.012 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.012 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.012 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.012 09:43:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.012 "name": "Existed_Raid", 00:08:50.012 "uuid": "33a1f414-b8f5-4a82-b42b-41ef4778b443", 00:08:50.012 "strip_size_kb": 64, 00:08:50.012 "state": "configuring", 00:08:50.012 "raid_level": "raid0", 00:08:50.012 "superblock": true, 00:08:50.012 "num_base_bdevs": 4, 00:08:50.012 "num_base_bdevs_discovered": 2, 00:08:50.012 "num_base_bdevs_operational": 4, 00:08:50.012 "base_bdevs_list": [ 00:08:50.012 { 00:08:50.012 "name": "BaseBdev1", 00:08:50.012 "uuid": "2b5095f9-3815-4307-9147-30bebd3b1ec2", 00:08:50.012 "is_configured": true, 00:08:50.012 "data_offset": 2048, 00:08:50.012 "data_size": 63488 00:08:50.012 }, 00:08:50.012 { 00:08:50.012 "name": null, 00:08:50.012 "uuid": "d0d90fae-011b-4b67-9db6-efab775adaf5", 00:08:50.012 "is_configured": false, 00:08:50.012 "data_offset": 0, 00:08:50.012 "data_size": 63488 00:08:50.012 }, 00:08:50.012 { 00:08:50.012 "name": null, 00:08:50.012 "uuid": "211aa9dc-d842-40e2-9ab9-eda66da3143d", 00:08:50.012 "is_configured": false, 00:08:50.012 "data_offset": 0, 00:08:50.012 "data_size": 63488 00:08:50.012 }, 00:08:50.012 { 00:08:50.012 "name": "BaseBdev4", 00:08:50.012 "uuid": "b4bf1a7d-2fab-4e20-b4d5-6e4a7607d45e", 00:08:50.012 "is_configured": true, 00:08:50.012 "data_offset": 2048, 00:08:50.012 "data_size": 63488 00:08:50.012 } 00:08:50.012 ] 00:08:50.012 }' 00:08:50.012 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.012 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.330 09:43:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.330 [2024-10-30 09:43:28.870423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.330 "name": "Existed_Raid", 00:08:50.330 "uuid": "33a1f414-b8f5-4a82-b42b-41ef4778b443", 00:08:50.330 "strip_size_kb": 64, 00:08:50.330 "state": "configuring", 00:08:50.330 "raid_level": "raid0", 00:08:50.330 "superblock": true, 00:08:50.330 "num_base_bdevs": 4, 00:08:50.330 "num_base_bdevs_discovered": 3, 00:08:50.330 "num_base_bdevs_operational": 4, 00:08:50.330 "base_bdevs_list": [ 00:08:50.330 { 00:08:50.330 "name": "BaseBdev1", 00:08:50.330 "uuid": "2b5095f9-3815-4307-9147-30bebd3b1ec2", 00:08:50.330 "is_configured": true, 00:08:50.330 "data_offset": 2048, 00:08:50.330 "data_size": 63488 00:08:50.330 }, 00:08:50.330 { 00:08:50.330 "name": null, 00:08:50.330 "uuid": "d0d90fae-011b-4b67-9db6-efab775adaf5", 00:08:50.330 "is_configured": false, 00:08:50.330 "data_offset": 0, 00:08:50.330 "data_size": 63488 00:08:50.330 }, 00:08:50.330 { 00:08:50.330 "name": "BaseBdev3", 00:08:50.330 "uuid": "211aa9dc-d842-40e2-9ab9-eda66da3143d", 00:08:50.330 "is_configured": true, 00:08:50.330 "data_offset": 2048, 00:08:50.330 "data_size": 63488 00:08:50.330 }, 00:08:50.330 { 00:08:50.330 "name": "BaseBdev4", 00:08:50.330 "uuid": 
"b4bf1a7d-2fab-4e20-b4d5-6e4a7607d45e", 00:08:50.330 "is_configured": true, 00:08:50.330 "data_offset": 2048, 00:08:50.330 "data_size": 63488 00:08:50.330 } 00:08:50.330 ] 00:08:50.330 }' 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.330 09:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.629 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:50.629 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.629 09:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.629 09:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.629 09:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.629 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:50.629 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:50.629 09:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.629 09:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.629 [2024-10-30 09:43:29.246539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:50.891 09:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.891 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:50.891 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.891 09:43:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.891 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.891 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.891 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:50.891 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.891 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.891 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.891 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.891 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.891 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.891 09:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.891 09:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.891 09:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.891 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.891 "name": "Existed_Raid", 00:08:50.891 "uuid": "33a1f414-b8f5-4a82-b42b-41ef4778b443", 00:08:50.891 "strip_size_kb": 64, 00:08:50.891 "state": "configuring", 00:08:50.891 "raid_level": "raid0", 00:08:50.891 "superblock": true, 00:08:50.891 "num_base_bdevs": 4, 00:08:50.891 "num_base_bdevs_discovered": 2, 00:08:50.891 "num_base_bdevs_operational": 4, 00:08:50.891 "base_bdevs_list": [ 00:08:50.891 { 00:08:50.891 "name": null, 00:08:50.891 
"uuid": "2b5095f9-3815-4307-9147-30bebd3b1ec2", 00:08:50.891 "is_configured": false, 00:08:50.891 "data_offset": 0, 00:08:50.891 "data_size": 63488 00:08:50.891 }, 00:08:50.891 { 00:08:50.891 "name": null, 00:08:50.891 "uuid": "d0d90fae-011b-4b67-9db6-efab775adaf5", 00:08:50.891 "is_configured": false, 00:08:50.891 "data_offset": 0, 00:08:50.891 "data_size": 63488 00:08:50.891 }, 00:08:50.891 { 00:08:50.891 "name": "BaseBdev3", 00:08:50.891 "uuid": "211aa9dc-d842-40e2-9ab9-eda66da3143d", 00:08:50.891 "is_configured": true, 00:08:50.891 "data_offset": 2048, 00:08:50.891 "data_size": 63488 00:08:50.891 }, 00:08:50.891 { 00:08:50.891 "name": "BaseBdev4", 00:08:50.891 "uuid": "b4bf1a7d-2fab-4e20-b4d5-6e4a7607d45e", 00:08:50.891 "is_configured": true, 00:08:50.891 "data_offset": 2048, 00:08:50.891 "data_size": 63488 00:08:50.891 } 00:08:50.891 ] 00:08:50.891 }' 00:08:50.891 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.891 09:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.155 [2024-10-30 09:43:29.669754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.155 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.155 "name": "Existed_Raid", 00:08:51.155 "uuid": "33a1f414-b8f5-4a82-b42b-41ef4778b443", 00:08:51.155 "strip_size_kb": 64, 00:08:51.155 "state": "configuring", 00:08:51.155 "raid_level": "raid0", 00:08:51.155 "superblock": true, 00:08:51.155 "num_base_bdevs": 4, 00:08:51.155 "num_base_bdevs_discovered": 3, 00:08:51.155 "num_base_bdevs_operational": 4, 00:08:51.155 "base_bdevs_list": [ 00:08:51.155 { 00:08:51.155 "name": null, 00:08:51.155 "uuid": "2b5095f9-3815-4307-9147-30bebd3b1ec2", 00:08:51.155 "is_configured": false, 00:08:51.155 "data_offset": 0, 00:08:51.155 "data_size": 63488 00:08:51.155 }, 00:08:51.155 { 00:08:51.155 "name": "BaseBdev2", 00:08:51.155 "uuid": "d0d90fae-011b-4b67-9db6-efab775adaf5", 00:08:51.155 "is_configured": true, 00:08:51.155 "data_offset": 2048, 00:08:51.155 "data_size": 63488 00:08:51.155 }, 00:08:51.155 { 00:08:51.155 "name": "BaseBdev3", 00:08:51.155 "uuid": "211aa9dc-d842-40e2-9ab9-eda66da3143d", 00:08:51.155 "is_configured": true, 00:08:51.155 "data_offset": 2048, 00:08:51.155 "data_size": 63488 00:08:51.155 }, 00:08:51.155 { 00:08:51.155 "name": "BaseBdev4", 00:08:51.155 "uuid": "b4bf1a7d-2fab-4e20-b4d5-6e4a7607d45e", 00:08:51.155 "is_configured": true, 00:08:51.155 "data_offset": 2048, 00:08:51.155 "data_size": 63488 00:08:51.155 } 00:08:51.155 ] 00:08:51.155 }' 00:08:51.156 09:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.156 09:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.416 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.416 09:43:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:51.416 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.416 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.416 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2b5095f9-3815-4307-9147-30bebd3b1ec2 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.678 [2024-10-30 09:43:30.104228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:51.678 [2024-10-30 09:43:30.104419] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:51.678 [2024-10-30 09:43:30.104431] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:51.678 [2024-10-30 09:43:30.104676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:08:51.678 NewBaseBdev 00:08:51.678 [2024-10-30 09:43:30.104810] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:51.678 [2024-10-30 09:43:30.104822] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:51.678 [2024-10-30 09:43:30.104934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.678 09:43:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.678 [ 00:08:51.678 { 00:08:51.678 "name": "NewBaseBdev", 00:08:51.678 "aliases": [ 00:08:51.678 "2b5095f9-3815-4307-9147-30bebd3b1ec2" 00:08:51.678 ], 00:08:51.678 "product_name": "Malloc disk", 00:08:51.678 "block_size": 512, 00:08:51.678 "num_blocks": 65536, 00:08:51.678 "uuid": "2b5095f9-3815-4307-9147-30bebd3b1ec2", 00:08:51.678 "assigned_rate_limits": { 00:08:51.678 "rw_ios_per_sec": 0, 00:08:51.678 "rw_mbytes_per_sec": 0, 00:08:51.678 "r_mbytes_per_sec": 0, 00:08:51.678 "w_mbytes_per_sec": 0 00:08:51.678 }, 00:08:51.678 "claimed": true, 00:08:51.678 "claim_type": "exclusive_write", 00:08:51.678 "zoned": false, 00:08:51.678 "supported_io_types": { 00:08:51.678 "read": true, 00:08:51.678 "write": true, 00:08:51.678 "unmap": true, 00:08:51.678 "flush": true, 00:08:51.678 "reset": true, 00:08:51.678 "nvme_admin": false, 00:08:51.678 "nvme_io": false, 00:08:51.678 "nvme_io_md": false, 00:08:51.678 "write_zeroes": true, 00:08:51.678 "zcopy": true, 00:08:51.678 "get_zone_info": false, 00:08:51.678 "zone_management": false, 00:08:51.678 "zone_append": false, 00:08:51.678 "compare": false, 00:08:51.678 "compare_and_write": false, 00:08:51.678 "abort": true, 00:08:51.678 "seek_hole": false, 00:08:51.678 "seek_data": false, 00:08:51.678 "copy": true, 00:08:51.678 "nvme_iov_md": false 00:08:51.678 }, 00:08:51.678 "memory_domains": [ 00:08:51.678 { 00:08:51.678 "dma_device_id": "system", 00:08:51.678 "dma_device_type": 1 00:08:51.678 }, 00:08:51.678 { 00:08:51.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.678 "dma_device_type": 2 00:08:51.678 } 00:08:51.678 ], 00:08:51.678 "driver_specific": {} 00:08:51.678 } 00:08:51.678 ] 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:51.678 09:43:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.678 "name": "Existed_Raid", 00:08:51.678 "uuid": "33a1f414-b8f5-4a82-b42b-41ef4778b443", 00:08:51.678 "strip_size_kb": 64, 00:08:51.678 
"state": "online", 00:08:51.678 "raid_level": "raid0", 00:08:51.678 "superblock": true, 00:08:51.678 "num_base_bdevs": 4, 00:08:51.678 "num_base_bdevs_discovered": 4, 00:08:51.678 "num_base_bdevs_operational": 4, 00:08:51.678 "base_bdevs_list": [ 00:08:51.678 { 00:08:51.678 "name": "NewBaseBdev", 00:08:51.678 "uuid": "2b5095f9-3815-4307-9147-30bebd3b1ec2", 00:08:51.678 "is_configured": true, 00:08:51.678 "data_offset": 2048, 00:08:51.678 "data_size": 63488 00:08:51.678 }, 00:08:51.678 { 00:08:51.678 "name": "BaseBdev2", 00:08:51.678 "uuid": "d0d90fae-011b-4b67-9db6-efab775adaf5", 00:08:51.678 "is_configured": true, 00:08:51.678 "data_offset": 2048, 00:08:51.678 "data_size": 63488 00:08:51.678 }, 00:08:51.678 { 00:08:51.678 "name": "BaseBdev3", 00:08:51.678 "uuid": "211aa9dc-d842-40e2-9ab9-eda66da3143d", 00:08:51.678 "is_configured": true, 00:08:51.678 "data_offset": 2048, 00:08:51.678 "data_size": 63488 00:08:51.678 }, 00:08:51.678 { 00:08:51.678 "name": "BaseBdev4", 00:08:51.678 "uuid": "b4bf1a7d-2fab-4e20-b4d5-6e4a7607d45e", 00:08:51.678 "is_configured": true, 00:08:51.678 "data_offset": 2048, 00:08:51.678 "data_size": 63488 00:08:51.678 } 00:08:51.678 ] 00:08:51.678 }' 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.678 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.938 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:51.938 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:51.938 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:51.938 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:51.938 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:51.938 
09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:51.938 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:51.938 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:51.938 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.938 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.938 [2024-10-30 09:43:30.476742] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.938 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.938 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:51.938 "name": "Existed_Raid", 00:08:51.938 "aliases": [ 00:08:51.938 "33a1f414-b8f5-4a82-b42b-41ef4778b443" 00:08:51.938 ], 00:08:51.938 "product_name": "Raid Volume", 00:08:51.938 "block_size": 512, 00:08:51.938 "num_blocks": 253952, 00:08:51.938 "uuid": "33a1f414-b8f5-4a82-b42b-41ef4778b443", 00:08:51.938 "assigned_rate_limits": { 00:08:51.938 "rw_ios_per_sec": 0, 00:08:51.938 "rw_mbytes_per_sec": 0, 00:08:51.938 "r_mbytes_per_sec": 0, 00:08:51.938 "w_mbytes_per_sec": 0 00:08:51.938 }, 00:08:51.938 "claimed": false, 00:08:51.938 "zoned": false, 00:08:51.938 "supported_io_types": { 00:08:51.938 "read": true, 00:08:51.938 "write": true, 00:08:51.938 "unmap": true, 00:08:51.938 "flush": true, 00:08:51.938 "reset": true, 00:08:51.938 "nvme_admin": false, 00:08:51.938 "nvme_io": false, 00:08:51.938 "nvme_io_md": false, 00:08:51.938 "write_zeroes": true, 00:08:51.938 "zcopy": false, 00:08:51.938 "get_zone_info": false, 00:08:51.938 "zone_management": false, 00:08:51.938 "zone_append": false, 00:08:51.938 "compare": false, 00:08:51.938 "compare_and_write": false, 00:08:51.938 "abort": 
false, 00:08:51.938 "seek_hole": false, 00:08:51.938 "seek_data": false, 00:08:51.938 "copy": false, 00:08:51.938 "nvme_iov_md": false 00:08:51.938 }, 00:08:51.938 "memory_domains": [ 00:08:51.938 { 00:08:51.938 "dma_device_id": "system", 00:08:51.938 "dma_device_type": 1 00:08:51.938 }, 00:08:51.938 { 00:08:51.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.938 "dma_device_type": 2 00:08:51.938 }, 00:08:51.938 { 00:08:51.938 "dma_device_id": "system", 00:08:51.938 "dma_device_type": 1 00:08:51.938 }, 00:08:51.938 { 00:08:51.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.938 "dma_device_type": 2 00:08:51.938 }, 00:08:51.938 { 00:08:51.938 "dma_device_id": "system", 00:08:51.939 "dma_device_type": 1 00:08:51.939 }, 00:08:51.939 { 00:08:51.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.939 "dma_device_type": 2 00:08:51.939 }, 00:08:51.939 { 00:08:51.939 "dma_device_id": "system", 00:08:51.939 "dma_device_type": 1 00:08:51.939 }, 00:08:51.939 { 00:08:51.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.939 "dma_device_type": 2 00:08:51.939 } 00:08:51.939 ], 00:08:51.939 "driver_specific": { 00:08:51.939 "raid": { 00:08:51.939 "uuid": "33a1f414-b8f5-4a82-b42b-41ef4778b443", 00:08:51.939 "strip_size_kb": 64, 00:08:51.939 "state": "online", 00:08:51.939 "raid_level": "raid0", 00:08:51.939 "superblock": true, 00:08:51.939 "num_base_bdevs": 4, 00:08:51.939 "num_base_bdevs_discovered": 4, 00:08:51.939 "num_base_bdevs_operational": 4, 00:08:51.939 "base_bdevs_list": [ 00:08:51.939 { 00:08:51.939 "name": "NewBaseBdev", 00:08:51.939 "uuid": "2b5095f9-3815-4307-9147-30bebd3b1ec2", 00:08:51.939 "is_configured": true, 00:08:51.939 "data_offset": 2048, 00:08:51.939 "data_size": 63488 00:08:51.939 }, 00:08:51.939 { 00:08:51.939 "name": "BaseBdev2", 00:08:51.939 "uuid": "d0d90fae-011b-4b67-9db6-efab775adaf5", 00:08:51.939 "is_configured": true, 00:08:51.939 "data_offset": 2048, 00:08:51.939 "data_size": 63488 00:08:51.939 }, 00:08:51.939 { 00:08:51.939 
"name": "BaseBdev3", 00:08:51.939 "uuid": "211aa9dc-d842-40e2-9ab9-eda66da3143d", 00:08:51.939 "is_configured": true, 00:08:51.939 "data_offset": 2048, 00:08:51.939 "data_size": 63488 00:08:51.939 }, 00:08:51.939 { 00:08:51.939 "name": "BaseBdev4", 00:08:51.939 "uuid": "b4bf1a7d-2fab-4e20-b4d5-6e4a7607d45e", 00:08:51.939 "is_configured": true, 00:08:51.939 "data_offset": 2048, 00:08:51.939 "data_size": 63488 00:08:51.939 } 00:08:51.939 ] 00:08:51.939 } 00:08:51.939 } 00:08:51.939 }' 00:08:51.939 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:51.939 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:51.939 BaseBdev2 00:08:51.939 BaseBdev3 00:08:51.939 BaseBdev4' 00:08:51.939 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.200 09:43:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]]
00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.200 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:52.200 [2024-10-30 09:43:30.708424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:52.201 [2024-10-30 09:43:30.708450] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:52.201 [2024-10-30 09:43:30.708518] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:52.201 [2024-10-30 09:43:30.708581] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:52.201 [2024-10-30 09:43:30.708591] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:08:52.201 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.201 09:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68406
00:08:52.201 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 68406 ']'
00:08:52.201 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 68406
00:08:52.201 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname
00:08:52.201 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:08:52.201 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68406
killing process with pid 68406
00:08:52.201 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:08:52.201 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:08:52.201 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68406'
00:08:52.201 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 68406
00:08:52.201 [2024-10-30 09:43:30.740039] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:52.201 09:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 68406
00:08:52.461 [2024-10-30 09:43:30.982394] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:53.412 09:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:08:53.412
00:08:53.412 real 0m8.502s
00:08:53.412 user 0m13.570s
00:08:53.412 sys 0m1.410s
00:08:53.412 09:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable
00:08:53.412 09:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.412 ************************************
00:08:53.412 END TEST raid_state_function_test_sb
00:08:53.412 ************************************
00:08:53.412 09:43:31 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4
00:08:53.412 09:43:31 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:08:53.412 09:43:31 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:08:53.412 09:43:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:53.412 ************************************
00:08:53.412 START TEST raid_superblock_test
00:08:53.412 ************************************
00:08:53.412 09:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 4
00:08:53.412 09:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0
00:08:53.412 09:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4
00:08:53.412 09:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:08:53.412 09:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:08:53.412 09:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:08:53.412 09:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:08:53.412 09:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:08:53.412 09:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:08:53.412 09:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:08:53.412 09:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:08:53.412 09:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:08:53.412 09:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:08:53.412 09:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:08:53.412 09:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']'
00:08:53.412 09:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:08:53.412 09:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:08:53.412 09:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=69049
00:08:53.412 09:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 69049
00:08:53.412 09:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 69049 ']'
00:08:53.412 09:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:53.412 09:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100
00:08:53.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
09:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:08:53.412 09:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:53.412 09:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:08:53.412 09:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:53.412 [2024-10-30 09:43:31.803116] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization...
00:08:53.412 [2024-10-30 09:43:31.803239] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69049 ]
00:08:53.412 [2024-10-30 09:43:31.966386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:53.673 [2024-10-30 09:43:32.071751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:53.673 [2024-10-30 09:43:32.211237] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:53.673 [2024-10-30 09:43:32.211294] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.248 malloc1
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.248 [2024-10-30 09:43:32.682872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
[2024-10-30 09:43:32.682932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-10-30 09:43:32.682951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
[2024-10-30 09:43:32.682961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-10-30 09:43:32.685195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-10-30 09:43:32.685240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
pt1
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.248 malloc2
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.248 [2024-10-30 09:43:32.727378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
[2024-10-30 09:43:32.727429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-10-30 09:43:32.727449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
[2024-10-30 09:43:32.727457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-10-30 09:43:32.729654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-10-30 09:43:32.729689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
pt2
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.248 malloc3
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.248 [2024-10-30 09:43:32.775765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
[2024-10-30 09:43:32.775927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-10-30 09:43:32.775956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
[2024-10-30 09:43:32.775965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-10-30 09:43:32.778169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-10-30 09:43:32.778204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
pt3
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.248 malloc4
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.248 [2024-10-30 09:43:32.812271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
[2024-10-30 09:43:32.812316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-10-30 09:43:32.812335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
[2024-10-30 09:43:32.812344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-10-30 09:43:32.814503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-10-30 09:43:32.814634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
pt4
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.248 [2024-10-30 09:43:32.820307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:08:54.248 [2024-10-30 09:43:32.822294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:54.248 [2024-10-30 09:43:32.822378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:08:54.248 [2024-10-30 09:43:32.822498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:08:54.248 [2024-10-30 09:43:32.822699] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:08:54.248 [2024-10-30 09:43:32.822769] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:08:54.248 [2024-10-30 09:43:32.823049] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:08:54.248 [2024-10-30 09:43:32.823267] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:08:54.248 [2024-10-30 09:43:32.823298] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:08:54.248 [2024-10-30 09:43:32.823497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:54.248 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.249 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:08:54.249 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:54.249 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:54.249 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:54.249 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:54.249 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:08:54.249 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:54.249 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:54.249 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:54.249 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:54.249 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:54.249 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.249 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.249 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:54.249 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.249 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:54.249 "name": "raid_bdev1",
00:08:54.249 "uuid": "766099b9-d335-4de1-b9e8-00fe2ffefa39",
00:08:54.249 "strip_size_kb": 64,
00:08:54.249 "state": "online",
00:08:54.249 "raid_level": "raid0",
00:08:54.249 "superblock": true,
00:08:54.249 "num_base_bdevs": 4,
00:08:54.249 "num_base_bdevs_discovered": 4,
00:08:54.249 "num_base_bdevs_operational": 4,
00:08:54.249 "base_bdevs_list": [
00:08:54.249 {
00:08:54.249 "name": "pt1",
00:08:54.249 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:54.249 "is_configured": true,
00:08:54.249 "data_offset": 2048,
00:08:54.249 "data_size": 63488
00:08:54.249 },
00:08:54.249 {
00:08:54.249 "name": "pt2",
00:08:54.249 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:54.249 "is_configured": true,
00:08:54.249 "data_offset": 2048,
00:08:54.249 "data_size": 63488
00:08:54.249 },
00:08:54.249 {
00:08:54.249 "name": "pt3",
00:08:54.249 "uuid": "00000000-0000-0000-0000-000000000003",
00:08:54.249 "is_configured": true,
00:08:54.249 "data_offset": 2048,
00:08:54.249 "data_size": 63488
00:08:54.249 },
00:08:54.249 {
00:08:54.249 "name": "pt4",
00:08:54.249 "uuid": "00000000-0000-0000-0000-000000000004",
00:08:54.249 "is_configured": true,
00:08:54.249 "data_offset": 2048,
00:08:54.249 "data_size": 63488
00:08:54.249 }
00:08:54.249 ]
00:08:54.249 }'
00:08:54.249 09:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:54.249 09:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.823 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:08:54.823 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:08:54.823 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:54.823 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:54.823 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:54.823 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:54.823 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:54.823 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.823 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.823 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:54.823 [2024-10-30 09:43:33.172723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:54.823 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.823 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:54.823 "name": "raid_bdev1",
00:08:54.823 "aliases": [
00:08:54.823 "766099b9-d335-4de1-b9e8-00fe2ffefa39"
00:08:54.823 ],
00:08:54.823 "product_name": "Raid Volume",
00:08:54.823 "block_size": 512,
00:08:54.823 "num_blocks": 253952,
00:08:54.823 "uuid": "766099b9-d335-4de1-b9e8-00fe2ffefa39",
00:08:54.823 "assigned_rate_limits": {
00:08:54.823 "rw_ios_per_sec": 0,
00:08:54.823 "rw_mbytes_per_sec": 0,
00:08:54.823 "r_mbytes_per_sec": 0,
00:08:54.823 "w_mbytes_per_sec": 0
00:08:54.823 },
00:08:54.823 "claimed": false,
00:08:54.823 "zoned": false,
00:08:54.823 "supported_io_types": {
00:08:54.823 "read": true,
00:08:54.823 "write": true,
00:08:54.823 "unmap": true,
00:08:54.823 "flush": true,
00:08:54.823 "reset": true,
00:08:54.823 "nvme_admin": false,
00:08:54.823 "nvme_io": false,
00:08:54.823 "nvme_io_md": false,
00:08:54.823 "write_zeroes": true,
00:08:54.823 "zcopy": false,
00:08:54.823 "get_zone_info": false,
00:08:54.823 "zone_management": false,
00:08:54.823 "zone_append": false,
00:08:54.823 "compare": false,
00:08:54.823 "compare_and_write": false,
00:08:54.823 "abort": false,
00:08:54.823 "seek_hole": false,
00:08:54.823 "seek_data": false,
00:08:54.823 "copy": false,
00:08:54.823 "nvme_iov_md": false
00:08:54.823 },
00:08:54.823 "memory_domains": [
00:08:54.823 {
00:08:54.823 "dma_device_id": "system",
00:08:54.823 "dma_device_type": 1
00:08:54.823 },
00:08:54.823 {
00:08:54.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:54.823 "dma_device_type": 2
00:08:54.823 },
00:08:54.823 {
00:08:54.823 "dma_device_id": "system",
00:08:54.823 "dma_device_type": 1
00:08:54.823 },
00:08:54.823 {
00:08:54.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:54.823 "dma_device_type": 2
00:08:54.823 },
00:08:54.823 {
00:08:54.823 "dma_device_id": "system",
00:08:54.823 "dma_device_type": 1
00:08:54.823 },
00:08:54.823 {
00:08:54.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:54.823 "dma_device_type": 2
00:08:54.823 },
00:08:54.823 {
00:08:54.823 "dma_device_id": "system",
00:08:54.823 "dma_device_type": 1
00:08:54.823 },
00:08:54.823 {
00:08:54.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:54.823 "dma_device_type": 2
00:08:54.823 }
00:08:54.823 ],
00:08:54.823 "driver_specific": {
00:08:54.823 "raid": {
00:08:54.823 "uuid": "766099b9-d335-4de1-b9e8-00fe2ffefa39",
00:08:54.823 "strip_size_kb": 64,
00:08:54.823 "state": "online",
00:08:54.823 "raid_level": "raid0",
00:08:54.823 "superblock": true,
00:08:54.823 "num_base_bdevs": 4,
00:08:54.823 "num_base_bdevs_discovered": 4,
00:08:54.823 "num_base_bdevs_operational": 4,
00:08:54.823 "base_bdevs_list": [
00:08:54.823 {
00:08:54.823 "name": "pt1",
00:08:54.823 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:54.823 "is_configured": true,
00:08:54.823 "data_offset": 2048,
00:08:54.823 "data_size": 63488
00:08:54.823 },
00:08:54.823 {
00:08:54.823 "name": "pt2",
00:08:54.823 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:54.823 "is_configured": true,
00:08:54.823 "data_offset": 2048,
00:08:54.823 "data_size": 63488
00:08:54.823 },
00:08:54.823 {
00:08:54.823 "name": "pt3",
00:08:54.823 "uuid": "00000000-0000-0000-0000-000000000003",
00:08:54.823 "is_configured": true,
00:08:54.823 "data_offset": 2048,
00:08:54.823 "data_size": 63488
00:08:54.823 },
00:08:54.823 {
00:08:54.823 "name": "pt4",
00:08:54.823 "uuid": "00000000-0000-0000-0000-000000000004",
00:08:54.823 "is_configured": true,
00:08:54.823 "data_offset": 2048,
00:08:54.823 "data_size": 63488
00:08:54.823 }
00:08:54.823 ]
00:08:54.823 }
00:08:54.823 }
00:08:54.823 }'
00:08:54.823 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:54.823 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:08:54.823 pt2
00:08:54.823 pt3
00:08:54.823 pt4'
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.824 [2024-10-30 09:43:33.400746] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=766099b9-d335-4de1-b9e8-00fe2ffefa39
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 766099b9-d335-4de1-b9e8-00fe2ffefa39 ']'
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.824 [2024-10-30 09:43:33.428418] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:54.824 [2024-10-30 09:43:33.428442] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:54.824 [2024-10-30 09:43:33.428510] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:54.824 [2024-10-30 09:43:33.428579] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:54.824 [2024-10-30 09:43:33.428592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.824 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:55.085 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.086 [2024-10-30 09:43:33.548467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:08:55.086 [2024-10-30 09:43:33.550456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:08:55.086 [2024-10-30 09:43:33.550506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:08:55.086 [2024-10-30 09:43:33.550541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:08:55.086 [2024-10-30 09:43:33.550588] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:08:55.086 [2024-10-30 09:43:33.550637] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:08:55.086 [2024-10-30 09:43:33.550657] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:08:55.086 [2024-10-30 09:43:33.550677] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:08:55.086 [2024-10-30 09:43:33.550691] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:55.086 [2024-10-30 09:43:33.550704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup,
0x617000007b00 name raid_bdev1, state configuring 00:08:55.086 request: 00:08:55.086 { 00:08:55.086 "name": "raid_bdev1", 00:08:55.086 "raid_level": "raid0", 00:08:55.086 "base_bdevs": [ 00:08:55.086 "malloc1", 00:08:55.086 "malloc2", 00:08:55.086 "malloc3", 00:08:55.086 "malloc4" 00:08:55.086 ], 00:08:55.086 "strip_size_kb": 64, 00:08:55.086 "superblock": false, 00:08:55.086 "method": "bdev_raid_create", 00:08:55.086 "req_id": 1 00:08:55.086 } 00:08:55.086 Got JSON-RPC error response 00:08:55.086 response: 00:08:55.086 { 00:08:55.086 "code": -17, 00:08:55.086 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:55.086 } 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.086 [2024-10-30 09:43:33.596449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:55.086 [2024-10-30 09:43:33.596593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.086 [2024-10-30 09:43:33.596633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:55.086 [2024-10-30 09:43:33.596686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.086 [2024-10-30 09:43:33.598927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.086 [2024-10-30 09:43:33.599042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:55.086 [2024-10-30 09:43:33.599135] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:55.086 [2024-10-30 09:43:33.599191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:55.086 pt1 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.086 "name": "raid_bdev1", 00:08:55.086 "uuid": "766099b9-d335-4de1-b9e8-00fe2ffefa39", 00:08:55.086 "strip_size_kb": 64, 00:08:55.086 "state": "configuring", 00:08:55.086 "raid_level": "raid0", 00:08:55.086 "superblock": true, 00:08:55.086 "num_base_bdevs": 4, 00:08:55.086 "num_base_bdevs_discovered": 1, 00:08:55.086 "num_base_bdevs_operational": 4, 00:08:55.086 "base_bdevs_list": [ 00:08:55.086 { 00:08:55.086 "name": "pt1", 00:08:55.086 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:55.086 "is_configured": true, 00:08:55.086 "data_offset": 2048, 00:08:55.086 "data_size": 63488 00:08:55.086 }, 00:08:55.086 { 00:08:55.086 "name": null, 00:08:55.086 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.086 "is_configured": false, 00:08:55.086 "data_offset": 2048, 00:08:55.086 "data_size": 63488 00:08:55.086 }, 00:08:55.086 { 00:08:55.086 "name": null, 00:08:55.086 
"uuid": "00000000-0000-0000-0000-000000000003", 00:08:55.086 "is_configured": false, 00:08:55.086 "data_offset": 2048, 00:08:55.086 "data_size": 63488 00:08:55.086 }, 00:08:55.086 { 00:08:55.086 "name": null, 00:08:55.086 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:55.086 "is_configured": false, 00:08:55.086 "data_offset": 2048, 00:08:55.086 "data_size": 63488 00:08:55.086 } 00:08:55.086 ] 00:08:55.086 }' 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.086 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.347 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:08:55.347 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:55.347 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.347 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.347 [2024-10-30 09:43:33.920554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:55.347 [2024-10-30 09:43:33.920620] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.347 [2024-10-30 09:43:33.920638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:08:55.347 [2024-10-30 09:43:33.920648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.347 [2024-10-30 09:43:33.921124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.347 [2024-10-30 09:43:33.921146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:55.347 [2024-10-30 09:43:33.921220] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:55.347 [2024-10-30 09:43:33.921241] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:55.347 pt2 00:08:55.347 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.347 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:55.347 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.347 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.347 [2024-10-30 09:43:33.928553] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:55.347 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.347 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:08:55.347 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.347 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.347 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.347 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.347 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:55.347 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.347 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.347 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.347 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.347 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.347 09:43:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.347 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.347 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.347 09:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.608 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.608 "name": "raid_bdev1", 00:08:55.608 "uuid": "766099b9-d335-4de1-b9e8-00fe2ffefa39", 00:08:55.608 "strip_size_kb": 64, 00:08:55.608 "state": "configuring", 00:08:55.608 "raid_level": "raid0", 00:08:55.608 "superblock": true, 00:08:55.608 "num_base_bdevs": 4, 00:08:55.608 "num_base_bdevs_discovered": 1, 00:08:55.608 "num_base_bdevs_operational": 4, 00:08:55.608 "base_bdevs_list": [ 00:08:55.608 { 00:08:55.608 "name": "pt1", 00:08:55.608 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:55.608 "is_configured": true, 00:08:55.608 "data_offset": 2048, 00:08:55.608 "data_size": 63488 00:08:55.608 }, 00:08:55.608 { 00:08:55.608 "name": null, 00:08:55.608 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.608 "is_configured": false, 00:08:55.608 "data_offset": 0, 00:08:55.608 "data_size": 63488 00:08:55.608 }, 00:08:55.608 { 00:08:55.608 "name": null, 00:08:55.608 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:55.608 "is_configured": false, 00:08:55.608 "data_offset": 2048, 00:08:55.608 "data_size": 63488 00:08:55.608 }, 00:08:55.608 { 00:08:55.608 "name": null, 00:08:55.608 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:55.608 "is_configured": false, 00:08:55.608 "data_offset": 2048, 00:08:55.608 "data_size": 63488 00:08:55.608 } 00:08:55.608 ] 00:08:55.608 }' 00:08:55.608 09:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.608 09:43:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.870 [2024-10-30 09:43:34.232624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:55.870 [2024-10-30 09:43:34.232681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.870 [2024-10-30 09:43:34.232700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:08:55.870 [2024-10-30 09:43:34.232710] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.870 [2024-10-30 09:43:34.233194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.870 [2024-10-30 09:43:34.233212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:55.870 [2024-10-30 09:43:34.233289] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:55.870 [2024-10-30 09:43:34.233309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:55.870 pt2 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.870 [2024-10-30 09:43:34.244612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:55.870 [2024-10-30 09:43:34.244659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.870 [2024-10-30 09:43:34.244686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:08:55.870 [2024-10-30 09:43:34.244696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.870 [2024-10-30 09:43:34.245129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.870 [2024-10-30 09:43:34.245145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:55.870 [2024-10-30 09:43:34.245209] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:55.870 [2024-10-30 09:43:34.245226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:55.870 pt3 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.870 [2024-10-30 09:43:34.252590] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:08:55.870 [2024-10-30 09:43:34.252634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.870 [2024-10-30 09:43:34.252650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:08:55.870 [2024-10-30 09:43:34.252658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.870 [2024-10-30 09:43:34.253123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.870 [2024-10-30 09:43:34.253149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:08:55.870 [2024-10-30 09:43:34.253208] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:08:55.870 [2024-10-30 09:43:34.253224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:08:55.870 [2024-10-30 09:43:34.253350] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:55.870 [2024-10-30 09:43:34.253358] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:55.870 [2024-10-30 09:43:34.253582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:55.870 [2024-10-30 09:43:34.253709] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:55.870 [2024-10-30 09:43:34.253720] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:55.870 [2024-10-30 09:43:34.253839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.870 pt4 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.870 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.870 "name": "raid_bdev1", 00:08:55.871 "uuid": "766099b9-d335-4de1-b9e8-00fe2ffefa39", 00:08:55.871 "strip_size_kb": 64, 00:08:55.871 "state": "online", 00:08:55.871 "raid_level": "raid0", 00:08:55.871 
"superblock": true, 00:08:55.871 "num_base_bdevs": 4, 00:08:55.871 "num_base_bdevs_discovered": 4, 00:08:55.871 "num_base_bdevs_operational": 4, 00:08:55.871 "base_bdevs_list": [ 00:08:55.871 { 00:08:55.871 "name": "pt1", 00:08:55.871 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:55.871 "is_configured": true, 00:08:55.871 "data_offset": 2048, 00:08:55.871 "data_size": 63488 00:08:55.871 }, 00:08:55.871 { 00:08:55.871 "name": "pt2", 00:08:55.871 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.871 "is_configured": true, 00:08:55.871 "data_offset": 2048, 00:08:55.871 "data_size": 63488 00:08:55.871 }, 00:08:55.871 { 00:08:55.871 "name": "pt3", 00:08:55.871 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:55.871 "is_configured": true, 00:08:55.871 "data_offset": 2048, 00:08:55.871 "data_size": 63488 00:08:55.871 }, 00:08:55.871 { 00:08:55.871 "name": "pt4", 00:08:55.871 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:55.871 "is_configured": true, 00:08:55.871 "data_offset": 2048, 00:08:55.871 "data_size": 63488 00:08:55.871 } 00:08:55.871 ] 00:08:55.871 }' 00:08:55.871 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.871 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:56.132 09:43:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:56.132 [2024-10-30 09:43:34.605146] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:56.132 "name": "raid_bdev1", 00:08:56.132 "aliases": [ 00:08:56.132 "766099b9-d335-4de1-b9e8-00fe2ffefa39" 00:08:56.132 ], 00:08:56.132 "product_name": "Raid Volume", 00:08:56.132 "block_size": 512, 00:08:56.132 "num_blocks": 253952, 00:08:56.132 "uuid": "766099b9-d335-4de1-b9e8-00fe2ffefa39", 00:08:56.132 "assigned_rate_limits": { 00:08:56.132 "rw_ios_per_sec": 0, 00:08:56.132 "rw_mbytes_per_sec": 0, 00:08:56.132 "r_mbytes_per_sec": 0, 00:08:56.132 "w_mbytes_per_sec": 0 00:08:56.132 }, 00:08:56.132 "claimed": false, 00:08:56.132 "zoned": false, 00:08:56.132 "supported_io_types": { 00:08:56.132 "read": true, 00:08:56.132 "write": true, 00:08:56.132 "unmap": true, 00:08:56.132 "flush": true, 00:08:56.132 "reset": true, 00:08:56.132 "nvme_admin": false, 00:08:56.132 "nvme_io": false, 00:08:56.132 "nvme_io_md": false, 00:08:56.132 "write_zeroes": true, 00:08:56.132 "zcopy": false, 00:08:56.132 "get_zone_info": false, 00:08:56.132 "zone_management": false, 00:08:56.132 "zone_append": false, 00:08:56.132 "compare": false, 00:08:56.132 "compare_and_write": false, 00:08:56.132 "abort": false, 00:08:56.132 "seek_hole": false, 00:08:56.132 "seek_data": false, 00:08:56.132 "copy": false, 00:08:56.132 "nvme_iov_md": false 00:08:56.132 }, 00:08:56.132 
"memory_domains": [ 00:08:56.132 { 00:08:56.132 "dma_device_id": "system", 00:08:56.132 "dma_device_type": 1 00:08:56.132 }, 00:08:56.132 { 00:08:56.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.132 "dma_device_type": 2 00:08:56.132 }, 00:08:56.132 { 00:08:56.132 "dma_device_id": "system", 00:08:56.132 "dma_device_type": 1 00:08:56.132 }, 00:08:56.132 { 00:08:56.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.132 "dma_device_type": 2 00:08:56.132 }, 00:08:56.132 { 00:08:56.132 "dma_device_id": "system", 00:08:56.132 "dma_device_type": 1 00:08:56.132 }, 00:08:56.132 { 00:08:56.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.132 "dma_device_type": 2 00:08:56.132 }, 00:08:56.132 { 00:08:56.132 "dma_device_id": "system", 00:08:56.132 "dma_device_type": 1 00:08:56.132 }, 00:08:56.132 { 00:08:56.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.132 "dma_device_type": 2 00:08:56.132 } 00:08:56.132 ], 00:08:56.132 "driver_specific": { 00:08:56.132 "raid": { 00:08:56.132 "uuid": "766099b9-d335-4de1-b9e8-00fe2ffefa39", 00:08:56.132 "strip_size_kb": 64, 00:08:56.132 "state": "online", 00:08:56.132 "raid_level": "raid0", 00:08:56.132 "superblock": true, 00:08:56.132 "num_base_bdevs": 4, 00:08:56.132 "num_base_bdevs_discovered": 4, 00:08:56.132 "num_base_bdevs_operational": 4, 00:08:56.132 "base_bdevs_list": [ 00:08:56.132 { 00:08:56.132 "name": "pt1", 00:08:56.132 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:56.132 "is_configured": true, 00:08:56.132 "data_offset": 2048, 00:08:56.132 "data_size": 63488 00:08:56.132 }, 00:08:56.132 { 00:08:56.132 "name": "pt2", 00:08:56.132 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:56.132 "is_configured": true, 00:08:56.132 "data_offset": 2048, 00:08:56.132 "data_size": 63488 00:08:56.132 }, 00:08:56.132 { 00:08:56.132 "name": "pt3", 00:08:56.132 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:56.132 "is_configured": true, 00:08:56.132 "data_offset": 2048, 00:08:56.132 "data_size": 63488 
00:08:56.132 },
00:08:56.132 {
00:08:56.132 "name": "pt4",
00:08:56.132 "uuid": "00000000-0000-0000-0000-000000000004",
00:08:56.132 "is_configured": true,
00:08:56.132 "data_offset": 2048,
00:08:56.132 "data_size": 63488
00:08:56.132 }
00:08:56.132 ]
00:08:56.132 }
00:08:56.132 }
00:08:56.132 }'
00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:08:56.132 pt2
00:08:56.132 pt3
00:08:56.132 pt4'
00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.132 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:56.394 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:56.394 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:56.394 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:56.394 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:08:56.394 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:56.394 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:56.394 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.394 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:56.394 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:56.394 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:56.394 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:56.394 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:56.394 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:08:56.395 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:56.395 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.395 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:56.395 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:56.395 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:56.395 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:08:56.395 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:56.395 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:56.395 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.395 [2024-10-30 09:43:34.833131] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:56.395 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:56.395 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 766099b9-d335-4de1-b9e8-00fe2ffefa39 '!=' 766099b9-d335-4de1-b9e8-00fe2ffefa39 ']'
00:08:56.395 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0
00:08:56.395 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:56.395 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:56.395 09:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 69049
00:08:56.395 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 69049 ']'
00:08:56.395 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 69049
00:08:56.395 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname
00:08:56.395 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:08:56.395 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69049
00:08:56.395 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:08:56.395 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:08:56.395 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69049'
killing process with pid 69049
09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 69049
00:08:56.395 [2024-10-30 09:43:34.885029] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:56.395 09:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 69049
00:08:56.395 [2024-10-30 09:43:34.885244] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:56.395 [2024-10-30 09:43:34.885338] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:56.395 [2024-10-30 09:43:34.885378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:08:56.656 [2024-10-30 09:43:35.136563] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:57.598 09:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:08:57.598
00:08:57.599 real 0m4.126s
00:08:57.599 user 0m5.965s
00:08:57.599 sys 0m0.590s
00:08:57.599 09:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:08:57.599 09:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.599 ************************************
00:08:57.599 END TEST raid_superblock_test
00:08:57.599 ************************************
00:08:57.599 09:43:35 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read
00:08:57.599 09:43:35 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:08:57.599 09:43:35 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:08:57.599 09:43:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:57.599 ************************************
00:08:57.599 START TEST raid_read_error_test
00:08:57.599 ************************************
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 read
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZzvY258KYk
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69297
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69297
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 69297 ']'
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:08:57.599 09:43:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.599 [2024-10-30 09:43:36.002560] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization...
00:08:57.599 [2024-10-30 09:43:36.002685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69297 ]
00:08:57.599 [2024-10-30 09:43:36.154279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:57.860 [2024-10-30 09:43:36.259038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:57.860 [2024-10-30 09:43:36.397973] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:57.860 [2024-10-30 09:43:36.398186] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:58.536 09:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:08:58.536 09:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0
00:08:58.536 09:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:58.536 09:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:08:58.536 09:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.536 09:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.536 BaseBdev1_malloc
00:08:58.536 09:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.536 09:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:08:58.536 09:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.536 09:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.536 true
00:08:58.536 09:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.537 09:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:08:58.537 09:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.537 09:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.537 [2024-10-30 09:43:36.902098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:08:58.537 [2024-10-30 09:43:36.902149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:58.537 [2024-10-30 09:43:36.902169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:08:58.537 [2024-10-30 09:43:36.902181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:58.537 [2024-10-30 09:43:36.904297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-10-30 09:43:36.904335] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:08:58.537 BaseBdev1
00:08:58.537 09:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.537 09:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:58.537 09:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:08:58.537 09:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.537 09:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.537 BaseBdev2_malloc
00:08:58.537 09:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.537 09:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:08:58.537 09:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.537 09:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.537 true
00:08:58.537 09:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.537 09:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:08:58.537 09:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.537 09:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.537 [2024-10-30 09:43:36.946663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:08:58.537 [2024-10-30 09:43:36.946714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:58.537 [2024-10-30 09:43:36.946733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:08:58.537 [2024-10-30 09:43:36.946744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:58.537 [2024-10-30 09:43:36.948898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-10-30 09:43:36.948948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:08:58.537 BaseBdev2
00:08:58.537 09:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.537 09:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:58.537 09:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:08:58.537 09:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.537 09:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.537 BaseBdev3_malloc
00:08:58.537 09:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.537 09:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:08:58.537 09:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.537 09:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.537 true
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.537 [2024-10-30 09:43:37.008029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:08:58.537 [2024-10-30 09:43:37.008093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:58.537 [2024-10-30 09:43:37.008109] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:08:58.537 [2024-10-30 09:43:37.008120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:58.537 [2024-10-30 09:43:37.010309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-10-30 09:43:37.010446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:08:58.537 BaseBdev3
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.537 BaseBdev4_malloc
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.537 true
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.537 [2024-10-30 09:43:37.056598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:08:58.537 [2024-10-30 09:43:37.056647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:58.537 [2024-10-30 09:43:37.056663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:08:58.537 [2024-10-30 09:43:37.056673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:58.537 [2024-10-30 09:43:37.058851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-10-30 09:43:37.058890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:08:58.537 BaseBdev4
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.537 [2024-10-30 09:43:37.064665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:58.537 [2024-10-30 09:43:37.066702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:58.537 [2024-10-30 09:43:37.066777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:58.537 [2024-10-30 09:43:37.066847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:08:58.537 [2024-10-30 09:43:37.067083] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580
00:08:58.537 [2024-10-30 09:43:37.067102] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:08:58.537 [2024-10-30 09:43:37.067340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:08:58.537 [2024-10-30 09:43:37.067482] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580
[2024-10-30 09:43:37.067492] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580
00:08:58.537 [2024-10-30 09:43:37.067630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.537 09:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:58.537 "name": "raid_bdev1",
00:08:58.537 "uuid": "52fea8e2-1efd-41a0-a2c5-1183009696f6",
00:08:58.537 "strip_size_kb": 64,
00:08:58.537 "state": "online",
00:08:58.537 "raid_level": "raid0",
00:08:58.537 "superblock": true,
00:08:58.537 "num_base_bdevs": 4,
00:08:58.537 "num_base_bdevs_discovered": 4,
00:08:58.537 "num_base_bdevs_operational": 4,
00:08:58.537 "base_bdevs_list": [
00:08:58.537 {
00:08:58.537 "name": "BaseBdev1",
00:08:58.537 "uuid": "a5ff6c20-6b20-5aee-afc8-24ff27d03ab1",
00:08:58.537 "is_configured": true,
00:08:58.537 "data_offset": 2048,
00:08:58.537 "data_size": 63488
00:08:58.537 },
00:08:58.538 {
00:08:58.538 "name": "BaseBdev2",
00:08:58.538 "uuid": "0a924038-0ada-510c-87c7-d9cff90c69e3",
00:08:58.538 "is_configured": true,
00:08:58.538 "data_offset": 2048,
00:08:58.538 "data_size": 63488
00:08:58.538 },
00:08:58.538 {
00:08:58.538 "name": "BaseBdev3",
00:08:58.538 "uuid": "09c5ddc2-9a39-5ecc-959d-f1b89ce2f595",
00:08:58.538 "is_configured": true,
00:08:58.538 "data_offset": 2048,
00:08:58.538 "data_size": 63488
00:08:58.538 },
00:08:58.538 {
00:08:58.538 "name": "BaseBdev4",
00:08:58.538 "uuid": "cc62aa5b-3e1f-5929-87d5-b4c994843687",
00:08:58.538 "is_configured": true,
00:08:58.538 "data_offset": 2048,
00:08:58.538 "data_size": 63488
00:08:58.538 }
00:08:58.538 ]
00:08:58.538 }'
00:08:58.538 09:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:58.538 09:43:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.799 09:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:08:58.799 09:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:08:59.060 [2024-10-30 09:43:37.473806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:09:00.003 09:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:09:00.003 09:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:00.003 09:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:00.003 09:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:00.003 09:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:00.003 09:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:09:00.003 09:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4
00:09:00.003 09:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:09:00.003 09:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:00.003 09:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:00.003 09:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:00.003 09:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:00.003 09:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:00.003 09:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:00.003 09:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:00.003 09:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:00.003 09:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:00.003 09:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:00.003 09:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:00.004 09:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:00.004 09:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:00.004 09:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:00.004 09:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:00.004 "name": "raid_bdev1",
00:09:00.004 "uuid": "52fea8e2-1efd-41a0-a2c5-1183009696f6",
00:09:00.004 "strip_size_kb": 64,
00:09:00.004 "state": "online",
00:09:00.004 "raid_level": "raid0",
00:09:00.004 "superblock": true,
00:09:00.004 "num_base_bdevs": 4,
00:09:00.004 "num_base_bdevs_discovered": 4,
00:09:00.004 "num_base_bdevs_operational": 4,
00:09:00.004 "base_bdevs_list": [
00:09:00.004 {
00:09:00.004 "name": "BaseBdev1",
00:09:00.004 "uuid": "a5ff6c20-6b20-5aee-afc8-24ff27d03ab1",
00:09:00.004 "is_configured": true,
00:09:00.004 "data_offset": 2048,
00:09:00.004 "data_size": 63488
00:09:00.004 },
00:09:00.004 {
00:09:00.004 "name": "BaseBdev2",
00:09:00.004 "uuid": "0a924038-0ada-510c-87c7-d9cff90c69e3",
00:09:00.004 "is_configured": true,
00:09:00.004 "data_offset": 2048,
00:09:00.004 "data_size": 63488
00:09:00.004 },
00:09:00.004 {
00:09:00.004 "name": "BaseBdev3",
00:09:00.004 "uuid": "09c5ddc2-9a39-5ecc-959d-f1b89ce2f595",
00:09:00.004 "is_configured": true,
00:09:00.004 "data_offset": 2048,
00:09:00.004 "data_size": 63488
00:09:00.004 },
00:09:00.004 {
00:09:00.004 "name": "BaseBdev4",
00:09:00.004 "uuid": "cc62aa5b-3e1f-5929-87d5-b4c994843687",
00:09:00.004 "is_configured": true,
00:09:00.004 "data_offset": 2048,
00:09:00.004 "data_size": 63488
00:09:00.004 }
00:09:00.004 ]
00:09:00.004 }'
00:09:00.004 09:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:00.004 09:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:00.265 09:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:00.265 09:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:00.265 09:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:00.265 [2024-10-30 09:43:38.711767] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:00.265 [2024-10-30 09:43:38.711909] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:00.265 [2024-10-30 09:43:38.715082] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:00.265 [2024-10-30 09:43:38.715233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:00.265 [2024-10-30 09:43:38.715301] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:00.265 [2024-10-30 09:43:38.715848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline
00:09:00.265 {
00:09:00.265 "results": [
00:09:00.265 {
00:09:00.265 "job": "raid_bdev1",
00:09:00.265 "core_mask": "0x1",
00:09:00.265 "workload": "randrw",
00:09:00.265 "percentage": 50,
00:09:00.265 "status": "finished",
00:09:00.265 "queue_depth": 1,
00:09:00.265 "io_size": 131072,
00:09:00.265 "runtime": 1.236242,
00:09:00.265 "iops": 14591.803222993556,
00:09:00.265 "mibps": 1823.9754028741945,
00:09:00.265 "io_failed": 1,
00:09:00.265 "io_timeout": 0,
00:09:00.265 "avg_latency_us": 93.87952925123658,
00:09:00.265 "min_latency_us": 33.28,
00:09:00.265 "max_latency_us": 1739.2246153846154
00:09:00.265 }
00:09:00.265 ],
00:09:00.265 "core_count": 1
00:09:00.265 }
00:09:00.265 09:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:00.265 09:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69297
00:09:00.265 09:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 69297 ']'
00:09:00.265 09:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 69297
00:09:00.265 09:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname
00:09:00.265 09:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:09:00.265 09:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69297
killing process with pid 69297
09:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:09:00.265 09:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:09:00.265 09:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69297'
00:09:00.265 09:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 69297
00:09:00.265 [2024-10-30 09:43:38.743146] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:00.265 09:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 69297
00:09:00.527 [2024-10-30 09:43:38.947174] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:01.100 09:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZzvY258KYk
00:09:01.100 09:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:09:01.100 09:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:09:01.100 09:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81
00:09:01.100 09:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:09:01.100 09:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:01.100 09:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:01.100 09:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]]
00:09:01.100
00:09:01.100 real 0m3.777s
00:09:01.100 user 0m4.446s
00:09:01.100 sys 0m0.414s
00:09:01.100 09:43:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:01.100 ************************************
00:09:01.100 END TEST raid_read_error_test
00:09:01.100 ************************************
00:09:01.100 09:43:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:01.360 09:43:39 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write
00:09:01.360 09:43:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:09:01.360 09:43:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:01.360 09:43:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:01.360 ************************************
00:09:01.360 START TEST raid_write_error_test
00:09:01.360 ************************************
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 write
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cuPlFkI2yy 00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69432 00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69432 00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 69432 ']' 00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.360 09:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:01.361 09:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:01.361 09:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.361 09:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:01.361 09:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.361 [2024-10-30 09:43:39.837806] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:09:01.361 [2024-10-30 09:43:39.837928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69432 ] 00:09:01.622 [2024-10-30 09:43:39.995242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.622 [2024-10-30 09:43:40.099886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.622 [2024-10-30 09:43:40.239193] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.622 [2024-10-30 09:43:40.239234] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.190 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:02.190 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:02.190 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:02.190 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:02.190 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.190 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.190 BaseBdev1_malloc 00:09:02.190 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.190 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:02.190 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.190 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.190 true 00:09:02.190 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:02.190 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:02.190 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.190 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.190 [2024-10-30 09:43:40.727831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:02.190 [2024-10-30 09:43:40.727887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.190 [2024-10-30 09:43:40.727907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:02.190 [2024-10-30 09:43:40.727917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.190 [2024-10-30 09:43:40.730195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.190 [2024-10-30 09:43:40.730233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:02.190 BaseBdev1 00:09:02.190 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.190 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:02.190 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:02.191 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.191 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.191 BaseBdev2_malloc 00:09:02.191 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.191 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:02.191 09:43:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.191 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.191 true 00:09:02.191 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.191 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:02.191 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.191 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.191 [2024-10-30 09:43:40.785536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:02.191 [2024-10-30 09:43:40.785734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.191 [2024-10-30 09:43:40.785767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:02.191 [2024-10-30 09:43:40.785784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.191 [2024-10-30 09:43:40.788812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.191 [2024-10-30 09:43:40.788978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:02.191 BaseBdev2 00:09:02.191 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.191 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:02.191 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:02.191 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.191 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:02.452 BaseBdev3_malloc 00:09:02.452 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.452 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:02.452 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.452 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.452 true 00:09:02.452 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.452 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:02.452 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.452 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.452 [2024-10-30 09:43:40.842097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:02.452 [2024-10-30 09:43:40.842144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.452 [2024-10-30 09:43:40.842160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:02.452 [2024-10-30 09:43:40.842171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.452 [2024-10-30 09:43:40.844297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.452 [2024-10-30 09:43:40.844332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:02.452 BaseBdev3 00:09:02.452 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.452 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:02.452 09:43:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:09:02.452 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.452 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.452 BaseBdev4_malloc 00:09:02.452 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.452 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:02.452 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.452 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.452 true 00:09:02.452 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.452 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:02.452 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.452 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.452 [2024-10-30 09:43:40.890537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:02.452 [2024-10-30 09:43:40.890584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.452 [2024-10-30 09:43:40.890601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:02.452 [2024-10-30 09:43:40.890611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.452 [2024-10-30 09:43:40.892699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.452 [2024-10-30 09:43:40.892739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:02.452 BaseBdev4 
00:09:02.452 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.452 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:02.452 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.452 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.452 [2024-10-30 09:43:40.898600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:02.452 [2024-10-30 09:43:40.900436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.452 [2024-10-30 09:43:40.900509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:02.452 [2024-10-30 09:43:40.900576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:02.452 [2024-10-30 09:43:40.900817] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:09:02.452 [2024-10-30 09:43:40.900836] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:02.452 [2024-10-30 09:43:40.901150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:02.452 [2024-10-30 09:43:40.901316] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:09:02.452 [2024-10-30 09:43:40.901328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:09:02.452 [2024-10-30 09:43:40.901468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.453 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.453 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:09:02.453 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:02.453 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:02.453 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.453 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.453 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:02.453 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.453 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.453 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.453 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.453 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.453 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:02.453 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.453 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.453 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.453 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.453 "name": "raid_bdev1", 00:09:02.453 "uuid": "66984f3c-5a93-456b-8fa8-22de546e3545", 00:09:02.453 "strip_size_kb": 64, 00:09:02.453 "state": "online", 00:09:02.453 "raid_level": "raid0", 00:09:02.453 "superblock": true, 00:09:02.453 "num_base_bdevs": 4, 00:09:02.453 "num_base_bdevs_discovered": 4, 00:09:02.453 
"num_base_bdevs_operational": 4, 00:09:02.453 "base_bdevs_list": [ 00:09:02.453 { 00:09:02.453 "name": "BaseBdev1", 00:09:02.453 "uuid": "66728e07-822b-57b3-aa62-cb4c3cbda29c", 00:09:02.453 "is_configured": true, 00:09:02.453 "data_offset": 2048, 00:09:02.453 "data_size": 63488 00:09:02.453 }, 00:09:02.453 { 00:09:02.453 "name": "BaseBdev2", 00:09:02.453 "uuid": "785011e1-c98d-544a-a3ef-72e499d9f2ea", 00:09:02.453 "is_configured": true, 00:09:02.453 "data_offset": 2048, 00:09:02.453 "data_size": 63488 00:09:02.453 }, 00:09:02.453 { 00:09:02.453 "name": "BaseBdev3", 00:09:02.453 "uuid": "c2d76fc1-87eb-5a19-9fa4-528109449621", 00:09:02.453 "is_configured": true, 00:09:02.453 "data_offset": 2048, 00:09:02.453 "data_size": 63488 00:09:02.453 }, 00:09:02.453 { 00:09:02.453 "name": "BaseBdev4", 00:09:02.453 "uuid": "b68f7a19-fb12-5f6e-b078-c031aa6daee8", 00:09:02.453 "is_configured": true, 00:09:02.453 "data_offset": 2048, 00:09:02.453 "data_size": 63488 00:09:02.453 } 00:09:02.453 ] 00:09:02.453 }' 00:09:02.453 09:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.453 09:43:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.716 09:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:02.716 09:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:02.716 [2024-10-30 09:43:41.291615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:03.662 09:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:03.662 09:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.662 09:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.662 09:43:42 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.662 09:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:03.662 09:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:03.662 09:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:03.662 09:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:03.662 09:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:03.662 09:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.662 09:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.662 09:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.662 09:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:03.662 09:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.662 09:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.662 09:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.662 09:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.662 09:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:03.662 09:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.662 09:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.662 09:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.662 09:43:42 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.662 09:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.662 "name": "raid_bdev1", 00:09:03.662 "uuid": "66984f3c-5a93-456b-8fa8-22de546e3545", 00:09:03.662 "strip_size_kb": 64, 00:09:03.662 "state": "online", 00:09:03.662 "raid_level": "raid0", 00:09:03.662 "superblock": true, 00:09:03.662 "num_base_bdevs": 4, 00:09:03.662 "num_base_bdevs_discovered": 4, 00:09:03.662 "num_base_bdevs_operational": 4, 00:09:03.662 "base_bdevs_list": [ 00:09:03.662 { 00:09:03.662 "name": "BaseBdev1", 00:09:03.662 "uuid": "66728e07-822b-57b3-aa62-cb4c3cbda29c", 00:09:03.662 "is_configured": true, 00:09:03.662 "data_offset": 2048, 00:09:03.662 "data_size": 63488 00:09:03.662 }, 00:09:03.662 { 00:09:03.662 "name": "BaseBdev2", 00:09:03.662 "uuid": "785011e1-c98d-544a-a3ef-72e499d9f2ea", 00:09:03.662 "is_configured": true, 00:09:03.662 "data_offset": 2048, 00:09:03.662 "data_size": 63488 00:09:03.662 }, 00:09:03.662 { 00:09:03.662 "name": "BaseBdev3", 00:09:03.662 "uuid": "c2d76fc1-87eb-5a19-9fa4-528109449621", 00:09:03.662 "is_configured": true, 00:09:03.662 "data_offset": 2048, 00:09:03.662 "data_size": 63488 00:09:03.662 }, 00:09:03.662 { 00:09:03.662 "name": "BaseBdev4", 00:09:03.662 "uuid": "b68f7a19-fb12-5f6e-b078-c031aa6daee8", 00:09:03.662 "is_configured": true, 00:09:03.662 "data_offset": 2048, 00:09:03.662 "data_size": 63488 00:09:03.662 } 00:09:03.662 ] 00:09:03.662 }' 00:09:03.662 09:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.662 09:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.235 09:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:04.235 09:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.235 09:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:04.235 [2024-10-30 09:43:42.549594] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:04.235 [2024-10-30 09:43:42.549624] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:04.235 { 00:09:04.235 "results": [ 00:09:04.235 { 00:09:04.235 "job": "raid_bdev1", 00:09:04.235 "core_mask": "0x1", 00:09:04.235 "workload": "randrw", 00:09:04.235 "percentage": 50, 00:09:04.235 "status": "finished", 00:09:04.235 "queue_depth": 1, 00:09:04.235 "io_size": 131072, 00:09:04.235 "runtime": 1.255977, 00:09:04.235 "iops": 14692.148025003642, 00:09:04.235 "mibps": 1836.5185031254553, 00:09:04.235 "io_failed": 1, 00:09:04.235 "io_timeout": 0, 00:09:04.235 "avg_latency_us": 93.05005410542638, 00:09:04.235 "min_latency_us": 33.08307692307692, 00:09:04.235 "max_latency_us": 1688.8123076923077 00:09:04.235 } 00:09:04.235 ], 00:09:04.235 "core_count": 1 00:09:04.235 } 00:09:04.235 [2024-10-30 09:43:42.552652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.235 [2024-10-30 09:43:42.552711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.235 [2024-10-30 09:43:42.552753] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:04.235 [2024-10-30 09:43:42.552764] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:09:04.235 09:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.235 09:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69432 00:09:04.235 09:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 69432 ']' 00:09:04.235 09:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 69432 00:09:04.235 09:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 
00:09:04.235 09:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:04.236 09:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69432 00:09:04.236 09:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:04.236 09:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:04.236 killing process with pid 69432 00:09:04.236 09:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69432' 00:09:04.236 09:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 69432 00:09:04.236 [2024-10-30 09:43:42.583641] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:04.236 09:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 69432 00:09:04.236 [2024-10-30 09:43:42.790639] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:05.235 09:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:05.235 09:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cuPlFkI2yy 00:09:05.235 09:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:05.235 ************************************ 00:09:05.235 END TEST raid_write_error_test 00:09:05.235 ************************************ 00:09:05.235 09:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80 00:09:05.235 09:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:05.235 09:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:05.235 09:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:05.235 09:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.80 != \0\.\0\0 ]] 00:09:05.235 00:09:05.235 real 0m3.791s 00:09:05.235 user 0m4.473s 00:09:05.235 sys 0m0.398s 00:09:05.235 09:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:05.235 09:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.235 09:43:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:05.236 09:43:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:09:05.236 09:43:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:05.236 09:43:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:05.236 09:43:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:05.236 ************************************ 00:09:05.236 START TEST raid_state_function_test 00:09:05.236 ************************************ 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 false 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:05.236 Process raid pid: 69564 00:09:05.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69564 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69564' 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69564 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 69564 ']' 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:05.236 
09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:05.236 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.236 [2024-10-30 09:43:43.680595] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:09:05.236 [2024-10-30 09:43:43.680715] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.236 [2024-10-30 09:43:43.839467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.497 [2024-10-30 09:43:43.942873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.497 [2024-10-30 09:43:44.083090] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.497 [2024-10-30 09:43:44.083123] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.070 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:06.070 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:06.070 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:06.070 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.070 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.070 
[2024-10-30 09:43:44.531890] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:06.070 [2024-10-30 09:43:44.531944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:06.070 [2024-10-30 09:43:44.531955] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:06.070 [2024-10-30 09:43:44.531964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:06.070 [2024-10-30 09:43:44.531970] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:06.070 [2024-10-30 09:43:44.531979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:06.070 [2024-10-30 09:43:44.531985] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:06.070 [2024-10-30 09:43:44.531993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:06.070 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.070 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:06.070 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.070 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.070 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.070 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.070 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:06.070 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.070 
09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.070 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.070 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.070 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.070 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.070 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.071 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.071 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.071 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.071 "name": "Existed_Raid", 00:09:06.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.071 "strip_size_kb": 64, 00:09:06.071 "state": "configuring", 00:09:06.071 "raid_level": "concat", 00:09:06.071 "superblock": false, 00:09:06.071 "num_base_bdevs": 4, 00:09:06.071 "num_base_bdevs_discovered": 0, 00:09:06.071 "num_base_bdevs_operational": 4, 00:09:06.071 "base_bdevs_list": [ 00:09:06.071 { 00:09:06.071 "name": "BaseBdev1", 00:09:06.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.071 "is_configured": false, 00:09:06.071 "data_offset": 0, 00:09:06.071 "data_size": 0 00:09:06.071 }, 00:09:06.071 { 00:09:06.071 "name": "BaseBdev2", 00:09:06.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.071 "is_configured": false, 00:09:06.071 "data_offset": 0, 00:09:06.071 "data_size": 0 00:09:06.071 }, 00:09:06.071 { 00:09:06.071 "name": "BaseBdev3", 00:09:06.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.071 "is_configured": false, 00:09:06.071 
"data_offset": 0, 00:09:06.071 "data_size": 0 00:09:06.071 }, 00:09:06.071 { 00:09:06.071 "name": "BaseBdev4", 00:09:06.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.071 "is_configured": false, 00:09:06.071 "data_offset": 0, 00:09:06.071 "data_size": 0 00:09:06.071 } 00:09:06.071 ] 00:09:06.071 }' 00:09:06.071 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.071 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.334 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:06.334 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.334 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.334 [2024-10-30 09:43:44.851931] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:06.334 [2024-10-30 09:43:44.851968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:06.334 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.334 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:06.334 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.334 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.335 [2024-10-30 09:43:44.859932] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:06.335 [2024-10-30 09:43:44.859972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:06.335 [2024-10-30 09:43:44.859981] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev2 00:09:06.335 [2024-10-30 09:43:44.859990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:06.335 [2024-10-30 09:43:44.859996] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:06.335 [2024-10-30 09:43:44.860005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:06.335 [2024-10-30 09:43:44.860011] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:06.335 [2024-10-30 09:43:44.860019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.335 [2024-10-30 09:43:44.892271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:06.335 BaseBdev1 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:06.335 09:43:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.335 [ 00:09:06.335 { 00:09:06.335 "name": "BaseBdev1", 00:09:06.335 "aliases": [ 00:09:06.335 "76622af6-f42c-43b6-bfc3-66620579a2a1" 00:09:06.335 ], 00:09:06.335 "product_name": "Malloc disk", 00:09:06.335 "block_size": 512, 00:09:06.335 "num_blocks": 65536, 00:09:06.335 "uuid": "76622af6-f42c-43b6-bfc3-66620579a2a1", 00:09:06.335 "assigned_rate_limits": { 00:09:06.335 "rw_ios_per_sec": 0, 00:09:06.335 "rw_mbytes_per_sec": 0, 00:09:06.335 "r_mbytes_per_sec": 0, 00:09:06.335 "w_mbytes_per_sec": 0 00:09:06.335 }, 00:09:06.335 "claimed": true, 00:09:06.335 "claim_type": "exclusive_write", 00:09:06.335 "zoned": false, 00:09:06.335 "supported_io_types": { 00:09:06.335 "read": true, 00:09:06.335 "write": true, 00:09:06.335 "unmap": true, 00:09:06.335 "flush": true, 00:09:06.335 "reset": true, 00:09:06.335 "nvme_admin": false, 00:09:06.335 "nvme_io": false, 00:09:06.335 "nvme_io_md": false, 00:09:06.335 "write_zeroes": true, 00:09:06.335 "zcopy": true, 00:09:06.335 "get_zone_info": false, 00:09:06.335 "zone_management": false, 00:09:06.335 "zone_append": false, 00:09:06.335 "compare": false, 
00:09:06.335 "compare_and_write": false, 00:09:06.335 "abort": true, 00:09:06.335 "seek_hole": false, 00:09:06.335 "seek_data": false, 00:09:06.335 "copy": true, 00:09:06.335 "nvme_iov_md": false 00:09:06.335 }, 00:09:06.335 "memory_domains": [ 00:09:06.335 { 00:09:06.335 "dma_device_id": "system", 00:09:06.335 "dma_device_type": 1 00:09:06.335 }, 00:09:06.335 { 00:09:06.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.335 "dma_device_type": 2 00:09:06.335 } 00:09:06.335 ], 00:09:06.335 "driver_specific": {} 00:09:06.335 } 00:09:06.335 ] 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.335 "name": "Existed_Raid", 00:09:06.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.335 "strip_size_kb": 64, 00:09:06.335 "state": "configuring", 00:09:06.335 "raid_level": "concat", 00:09:06.335 "superblock": false, 00:09:06.335 "num_base_bdevs": 4, 00:09:06.335 "num_base_bdevs_discovered": 1, 00:09:06.335 "num_base_bdevs_operational": 4, 00:09:06.335 "base_bdevs_list": [ 00:09:06.335 { 00:09:06.335 "name": "BaseBdev1", 00:09:06.335 "uuid": "76622af6-f42c-43b6-bfc3-66620579a2a1", 00:09:06.335 "is_configured": true, 00:09:06.335 "data_offset": 0, 00:09:06.335 "data_size": 65536 00:09:06.335 }, 00:09:06.335 { 00:09:06.335 "name": "BaseBdev2", 00:09:06.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.335 "is_configured": false, 00:09:06.335 "data_offset": 0, 00:09:06.335 "data_size": 0 00:09:06.335 }, 00:09:06.335 { 00:09:06.335 "name": "BaseBdev3", 00:09:06.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.335 "is_configured": false, 00:09:06.335 "data_offset": 0, 00:09:06.335 "data_size": 0 00:09:06.335 }, 00:09:06.335 { 00:09:06.335 "name": "BaseBdev4", 00:09:06.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.335 "is_configured": false, 00:09:06.335 "data_offset": 0, 00:09:06.335 "data_size": 0 00:09:06.335 } 00:09:06.335 ] 00:09:06.335 }' 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.335 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.908 [2024-10-30 09:43:45.248391] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:06.908 [2024-10-30 09:43:45.248539] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.908 [2024-10-30 09:43:45.256455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:06.908 [2024-10-30 09:43:45.258365] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:06.908 [2024-10-30 09:43:45.258406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:06.908 [2024-10-30 09:43:45.258415] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:06.908 [2024-10-30 09:43:45.258425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:06.908 [2024-10-30 09:43:45.258433] bdev.c:8271:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:09:06.908 [2024-10-30 09:43:45.258441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.908 "name": "Existed_Raid", 00:09:06.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.908 "strip_size_kb": 64, 00:09:06.908 "state": "configuring", 00:09:06.908 "raid_level": "concat", 00:09:06.908 "superblock": false, 00:09:06.908 "num_base_bdevs": 4, 00:09:06.908 "num_base_bdevs_discovered": 1, 00:09:06.908 "num_base_bdevs_operational": 4, 00:09:06.908 "base_bdevs_list": [ 00:09:06.908 { 00:09:06.908 "name": "BaseBdev1", 00:09:06.908 "uuid": "76622af6-f42c-43b6-bfc3-66620579a2a1", 00:09:06.908 "is_configured": true, 00:09:06.908 "data_offset": 0, 00:09:06.908 "data_size": 65536 00:09:06.908 }, 00:09:06.908 { 00:09:06.908 "name": "BaseBdev2", 00:09:06.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.908 "is_configured": false, 00:09:06.908 "data_offset": 0, 00:09:06.908 "data_size": 0 00:09:06.908 }, 00:09:06.908 { 00:09:06.908 "name": "BaseBdev3", 00:09:06.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.908 "is_configured": false, 00:09:06.908 "data_offset": 0, 00:09:06.908 "data_size": 0 00:09:06.908 }, 00:09:06.908 { 00:09:06.908 "name": "BaseBdev4", 00:09:06.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.908 "is_configured": false, 00:09:06.908 "data_offset": 0, 00:09:06.908 "data_size": 0 00:09:06.908 } 00:09:06.908 ] 00:09:06.908 }' 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.908 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.170 [2024-10-30 09:43:45.599013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:07.170 BaseBdev2 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.170 [ 00:09:07.170 { 00:09:07.170 "name": 
"BaseBdev2", 00:09:07.170 "aliases": [ 00:09:07.170 "3861ddb5-3deb-4b2d-88a7-eac8c8b3d3ac" 00:09:07.170 ], 00:09:07.170 "product_name": "Malloc disk", 00:09:07.170 "block_size": 512, 00:09:07.170 "num_blocks": 65536, 00:09:07.170 "uuid": "3861ddb5-3deb-4b2d-88a7-eac8c8b3d3ac", 00:09:07.170 "assigned_rate_limits": { 00:09:07.170 "rw_ios_per_sec": 0, 00:09:07.170 "rw_mbytes_per_sec": 0, 00:09:07.170 "r_mbytes_per_sec": 0, 00:09:07.170 "w_mbytes_per_sec": 0 00:09:07.170 }, 00:09:07.170 "claimed": true, 00:09:07.170 "claim_type": "exclusive_write", 00:09:07.170 "zoned": false, 00:09:07.170 "supported_io_types": { 00:09:07.170 "read": true, 00:09:07.170 "write": true, 00:09:07.170 "unmap": true, 00:09:07.170 "flush": true, 00:09:07.170 "reset": true, 00:09:07.170 "nvme_admin": false, 00:09:07.170 "nvme_io": false, 00:09:07.170 "nvme_io_md": false, 00:09:07.170 "write_zeroes": true, 00:09:07.170 "zcopy": true, 00:09:07.170 "get_zone_info": false, 00:09:07.170 "zone_management": false, 00:09:07.170 "zone_append": false, 00:09:07.170 "compare": false, 00:09:07.170 "compare_and_write": false, 00:09:07.170 "abort": true, 00:09:07.170 "seek_hole": false, 00:09:07.170 "seek_data": false, 00:09:07.170 "copy": true, 00:09:07.170 "nvme_iov_md": false 00:09:07.170 }, 00:09:07.170 "memory_domains": [ 00:09:07.170 { 00:09:07.170 "dma_device_id": "system", 00:09:07.170 "dma_device_type": 1 00:09:07.170 }, 00:09:07.170 { 00:09:07.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.170 "dma_device_type": 2 00:09:07.170 } 00:09:07.170 ], 00:09:07.170 "driver_specific": {} 00:09:07.170 } 00:09:07.170 ] 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.170 "name": "Existed_Raid", 00:09:07.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.170 
"strip_size_kb": 64, 00:09:07.170 "state": "configuring", 00:09:07.170 "raid_level": "concat", 00:09:07.170 "superblock": false, 00:09:07.170 "num_base_bdevs": 4, 00:09:07.170 "num_base_bdevs_discovered": 2, 00:09:07.170 "num_base_bdevs_operational": 4, 00:09:07.170 "base_bdevs_list": [ 00:09:07.170 { 00:09:07.170 "name": "BaseBdev1", 00:09:07.170 "uuid": "76622af6-f42c-43b6-bfc3-66620579a2a1", 00:09:07.170 "is_configured": true, 00:09:07.170 "data_offset": 0, 00:09:07.170 "data_size": 65536 00:09:07.170 }, 00:09:07.170 { 00:09:07.170 "name": "BaseBdev2", 00:09:07.170 "uuid": "3861ddb5-3deb-4b2d-88a7-eac8c8b3d3ac", 00:09:07.170 "is_configured": true, 00:09:07.170 "data_offset": 0, 00:09:07.170 "data_size": 65536 00:09:07.170 }, 00:09:07.170 { 00:09:07.170 "name": "BaseBdev3", 00:09:07.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.170 "is_configured": false, 00:09:07.170 "data_offset": 0, 00:09:07.170 "data_size": 0 00:09:07.170 }, 00:09:07.170 { 00:09:07.170 "name": "BaseBdev4", 00:09:07.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.170 "is_configured": false, 00:09:07.170 "data_offset": 0, 00:09:07.170 "data_size": 0 00:09:07.170 } 00:09:07.170 ] 00:09:07.170 }' 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.170 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.433 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:07.433 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.433 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.433 [2024-10-30 09:43:45.977194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:07.433 BaseBdev3 00:09:07.433 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:07.433 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:07.433 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:07.433 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:07.433 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:07.433 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:07.433 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:07.433 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:07.433 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.433 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.433 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.433 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:07.433 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.433 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.433 [ 00:09:07.433 { 00:09:07.433 "name": "BaseBdev3", 00:09:07.433 "aliases": [ 00:09:07.433 "f17ce16a-cd6a-446e-aa4c-0d385633fa41" 00:09:07.433 ], 00:09:07.433 "product_name": "Malloc disk", 00:09:07.433 "block_size": 512, 00:09:07.433 "num_blocks": 65536, 00:09:07.433 "uuid": "f17ce16a-cd6a-446e-aa4c-0d385633fa41", 00:09:07.433 "assigned_rate_limits": { 00:09:07.433 "rw_ios_per_sec": 0, 00:09:07.433 "rw_mbytes_per_sec": 0, 00:09:07.433 "r_mbytes_per_sec": 0, 00:09:07.433 "w_mbytes_per_sec": 0 
00:09:07.433 }, 00:09:07.433 "claimed": true, 00:09:07.433 "claim_type": "exclusive_write", 00:09:07.433 "zoned": false, 00:09:07.433 "supported_io_types": { 00:09:07.433 "read": true, 00:09:07.433 "write": true, 00:09:07.433 "unmap": true, 00:09:07.433 "flush": true, 00:09:07.433 "reset": true, 00:09:07.433 "nvme_admin": false, 00:09:07.433 "nvme_io": false, 00:09:07.433 "nvme_io_md": false, 00:09:07.433 "write_zeroes": true, 00:09:07.433 "zcopy": true, 00:09:07.433 "get_zone_info": false, 00:09:07.433 "zone_management": false, 00:09:07.433 "zone_append": false, 00:09:07.433 "compare": false, 00:09:07.433 "compare_and_write": false, 00:09:07.433 "abort": true, 00:09:07.433 "seek_hole": false, 00:09:07.433 "seek_data": false, 00:09:07.433 "copy": true, 00:09:07.433 "nvme_iov_md": false 00:09:07.433 }, 00:09:07.433 "memory_domains": [ 00:09:07.433 { 00:09:07.433 "dma_device_id": "system", 00:09:07.433 "dma_device_type": 1 00:09:07.433 }, 00:09:07.433 { 00:09:07.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.433 "dma_device_type": 2 00:09:07.433 } 00:09:07.433 ], 00:09:07.433 "driver_specific": {} 00:09:07.433 } 00:09:07.433 ] 00:09:07.433 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.433 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:07.433 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:07.433 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:07.433 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:07.433 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.433 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.433 09:43:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.433 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.433 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:07.433 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.433 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.433 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.433 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.433 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.433 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.433 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.433 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.433 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.433 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.433 "name": "Existed_Raid", 00:09:07.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.433 "strip_size_kb": 64, 00:09:07.433 "state": "configuring", 00:09:07.433 "raid_level": "concat", 00:09:07.433 "superblock": false, 00:09:07.433 "num_base_bdevs": 4, 00:09:07.433 "num_base_bdevs_discovered": 3, 00:09:07.433 "num_base_bdevs_operational": 4, 00:09:07.434 "base_bdevs_list": [ 00:09:07.434 { 00:09:07.434 "name": "BaseBdev1", 00:09:07.434 "uuid": "76622af6-f42c-43b6-bfc3-66620579a2a1", 00:09:07.434 "is_configured": true, 00:09:07.434 "data_offset": 
0, 00:09:07.434 "data_size": 65536 00:09:07.434 }, 00:09:07.434 { 00:09:07.434 "name": "BaseBdev2", 00:09:07.434 "uuid": "3861ddb5-3deb-4b2d-88a7-eac8c8b3d3ac", 00:09:07.434 "is_configured": true, 00:09:07.434 "data_offset": 0, 00:09:07.434 "data_size": 65536 00:09:07.434 }, 00:09:07.434 { 00:09:07.434 "name": "BaseBdev3", 00:09:07.434 "uuid": "f17ce16a-cd6a-446e-aa4c-0d385633fa41", 00:09:07.434 "is_configured": true, 00:09:07.434 "data_offset": 0, 00:09:07.434 "data_size": 65536 00:09:07.434 }, 00:09:07.434 { 00:09:07.434 "name": "BaseBdev4", 00:09:07.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.434 "is_configured": false, 00:09:07.434 "data_offset": 0, 00:09:07.434 "data_size": 0 00:09:07.434 } 00:09:07.434 ] 00:09:07.434 }' 00:09:07.434 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.434 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.005 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:08.005 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.005 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.005 [2024-10-30 09:43:46.347887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:08.005 [2024-10-30 09:43:46.347929] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:08.005 [2024-10-30 09:43:46.347937] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:08.005 [2024-10-30 09:43:46.348224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:08.005 [2024-10-30 09:43:46.348370] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:08.005 [2024-10-30 09:43:46.348381] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:08.005 [2024-10-30 09:43:46.348608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.005 BaseBdev4 00:09:08.005 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.005 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:08.005 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:09:08.005 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:08.005 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:08.005 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:08.005 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:08.005 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:08.005 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.005 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.005 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.005 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:08.005 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.005 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.005 [ 00:09:08.005 { 00:09:08.005 "name": "BaseBdev4", 00:09:08.005 "aliases": [ 00:09:08.005 "f5a57537-ef3e-433f-be78-57330917350a" 00:09:08.005 ], 00:09:08.005 
"product_name": "Malloc disk", 00:09:08.005 "block_size": 512, 00:09:08.005 "num_blocks": 65536, 00:09:08.005 "uuid": "f5a57537-ef3e-433f-be78-57330917350a", 00:09:08.005 "assigned_rate_limits": { 00:09:08.005 "rw_ios_per_sec": 0, 00:09:08.005 "rw_mbytes_per_sec": 0, 00:09:08.005 "r_mbytes_per_sec": 0, 00:09:08.005 "w_mbytes_per_sec": 0 00:09:08.005 }, 00:09:08.005 "claimed": true, 00:09:08.005 "claim_type": "exclusive_write", 00:09:08.005 "zoned": false, 00:09:08.005 "supported_io_types": { 00:09:08.005 "read": true, 00:09:08.005 "write": true, 00:09:08.005 "unmap": true, 00:09:08.005 "flush": true, 00:09:08.005 "reset": true, 00:09:08.005 "nvme_admin": false, 00:09:08.005 "nvme_io": false, 00:09:08.005 "nvme_io_md": false, 00:09:08.005 "write_zeroes": true, 00:09:08.005 "zcopy": true, 00:09:08.005 "get_zone_info": false, 00:09:08.005 "zone_management": false, 00:09:08.005 "zone_append": false, 00:09:08.005 "compare": false, 00:09:08.005 "compare_and_write": false, 00:09:08.005 "abort": true, 00:09:08.005 "seek_hole": false, 00:09:08.006 "seek_data": false, 00:09:08.006 "copy": true, 00:09:08.006 "nvme_iov_md": false 00:09:08.006 }, 00:09:08.006 "memory_domains": [ 00:09:08.006 { 00:09:08.006 "dma_device_id": "system", 00:09:08.006 "dma_device_type": 1 00:09:08.006 }, 00:09:08.006 { 00:09:08.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.006 "dma_device_type": 2 00:09:08.006 } 00:09:08.006 ], 00:09:08.006 "driver_specific": {} 00:09:08.006 } 00:09:08.006 ] 00:09:08.006 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.006 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:08.006 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:08.006 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:08.006 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # 
verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:08.006 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.006 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:08.006 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.006 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.006 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:08.006 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.006 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.006 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.006 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.006 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.006 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.006 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.006 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.006 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.006 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.006 "name": "Existed_Raid", 00:09:08.006 "uuid": "cbc2ff23-2504-4bad-a4f3-1095ce476853", 00:09:08.006 "strip_size_kb": 64, 00:09:08.006 "state": "online", 00:09:08.006 "raid_level": "concat", 00:09:08.006 "superblock": false, 00:09:08.006 
"num_base_bdevs": 4, 00:09:08.006 "num_base_bdevs_discovered": 4, 00:09:08.006 "num_base_bdevs_operational": 4, 00:09:08.006 "base_bdevs_list": [ 00:09:08.006 { 00:09:08.006 "name": "BaseBdev1", 00:09:08.006 "uuid": "76622af6-f42c-43b6-bfc3-66620579a2a1", 00:09:08.006 "is_configured": true, 00:09:08.006 "data_offset": 0, 00:09:08.006 "data_size": 65536 00:09:08.006 }, 00:09:08.006 { 00:09:08.006 "name": "BaseBdev2", 00:09:08.006 "uuid": "3861ddb5-3deb-4b2d-88a7-eac8c8b3d3ac", 00:09:08.006 "is_configured": true, 00:09:08.006 "data_offset": 0, 00:09:08.006 "data_size": 65536 00:09:08.006 }, 00:09:08.006 { 00:09:08.006 "name": "BaseBdev3", 00:09:08.006 "uuid": "f17ce16a-cd6a-446e-aa4c-0d385633fa41", 00:09:08.006 "is_configured": true, 00:09:08.006 "data_offset": 0, 00:09:08.006 "data_size": 65536 00:09:08.006 }, 00:09:08.006 { 00:09:08.006 "name": "BaseBdev4", 00:09:08.006 "uuid": "f5a57537-ef3e-433f-be78-57330917350a", 00:09:08.006 "is_configured": true, 00:09:08.006 "data_offset": 0, 00:09:08.006 "data_size": 65536 00:09:08.006 } 00:09:08.006 ] 00:09:08.006 }' 00:09:08.006 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.006 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.267 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:08.267 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:08.267 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:08.267 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:08.267 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:08.267 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:08.267 09:43:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:08.267 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:08.267 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.267 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.267 [2024-10-30 09:43:46.704424] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.267 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.267 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:08.267 "name": "Existed_Raid", 00:09:08.267 "aliases": [ 00:09:08.267 "cbc2ff23-2504-4bad-a4f3-1095ce476853" 00:09:08.267 ], 00:09:08.267 "product_name": "Raid Volume", 00:09:08.267 "block_size": 512, 00:09:08.267 "num_blocks": 262144, 00:09:08.267 "uuid": "cbc2ff23-2504-4bad-a4f3-1095ce476853", 00:09:08.267 "assigned_rate_limits": { 00:09:08.267 "rw_ios_per_sec": 0, 00:09:08.267 "rw_mbytes_per_sec": 0, 00:09:08.267 "r_mbytes_per_sec": 0, 00:09:08.267 "w_mbytes_per_sec": 0 00:09:08.267 }, 00:09:08.267 "claimed": false, 00:09:08.267 "zoned": false, 00:09:08.267 "supported_io_types": { 00:09:08.267 "read": true, 00:09:08.267 "write": true, 00:09:08.267 "unmap": true, 00:09:08.267 "flush": true, 00:09:08.267 "reset": true, 00:09:08.267 "nvme_admin": false, 00:09:08.267 "nvme_io": false, 00:09:08.267 "nvme_io_md": false, 00:09:08.267 "write_zeroes": true, 00:09:08.267 "zcopy": false, 00:09:08.267 "get_zone_info": false, 00:09:08.267 "zone_management": false, 00:09:08.267 "zone_append": false, 00:09:08.267 "compare": false, 00:09:08.267 "compare_and_write": false, 00:09:08.267 "abort": false, 00:09:08.267 "seek_hole": false, 00:09:08.267 "seek_data": false, 00:09:08.267 "copy": false, 00:09:08.267 "nvme_iov_md": false 00:09:08.267 }, 
00:09:08.267 "memory_domains": [ 00:09:08.267 { 00:09:08.267 "dma_device_id": "system", 00:09:08.267 "dma_device_type": 1 00:09:08.267 }, 00:09:08.267 { 00:09:08.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.267 "dma_device_type": 2 00:09:08.268 }, 00:09:08.268 { 00:09:08.268 "dma_device_id": "system", 00:09:08.268 "dma_device_type": 1 00:09:08.268 }, 00:09:08.268 { 00:09:08.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.268 "dma_device_type": 2 00:09:08.268 }, 00:09:08.268 { 00:09:08.268 "dma_device_id": "system", 00:09:08.268 "dma_device_type": 1 00:09:08.268 }, 00:09:08.268 { 00:09:08.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.268 "dma_device_type": 2 00:09:08.268 }, 00:09:08.268 { 00:09:08.268 "dma_device_id": "system", 00:09:08.268 "dma_device_type": 1 00:09:08.268 }, 00:09:08.268 { 00:09:08.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.268 "dma_device_type": 2 00:09:08.268 } 00:09:08.268 ], 00:09:08.268 "driver_specific": { 00:09:08.268 "raid": { 00:09:08.268 "uuid": "cbc2ff23-2504-4bad-a4f3-1095ce476853", 00:09:08.268 "strip_size_kb": 64, 00:09:08.268 "state": "online", 00:09:08.268 "raid_level": "concat", 00:09:08.268 "superblock": false, 00:09:08.268 "num_base_bdevs": 4, 00:09:08.268 "num_base_bdevs_discovered": 4, 00:09:08.268 "num_base_bdevs_operational": 4, 00:09:08.268 "base_bdevs_list": [ 00:09:08.268 { 00:09:08.268 "name": "BaseBdev1", 00:09:08.268 "uuid": "76622af6-f42c-43b6-bfc3-66620579a2a1", 00:09:08.268 "is_configured": true, 00:09:08.268 "data_offset": 0, 00:09:08.268 "data_size": 65536 00:09:08.268 }, 00:09:08.268 { 00:09:08.268 "name": "BaseBdev2", 00:09:08.268 "uuid": "3861ddb5-3deb-4b2d-88a7-eac8c8b3d3ac", 00:09:08.268 "is_configured": true, 00:09:08.268 "data_offset": 0, 00:09:08.268 "data_size": 65536 00:09:08.268 }, 00:09:08.268 { 00:09:08.268 "name": "BaseBdev3", 00:09:08.268 "uuid": "f17ce16a-cd6a-446e-aa4c-0d385633fa41", 00:09:08.268 "is_configured": true, 00:09:08.268 "data_offset": 0, 
00:09:08.268 "data_size": 65536 00:09:08.268 }, 00:09:08.268 { 00:09:08.268 "name": "BaseBdev4", 00:09:08.268 "uuid": "f5a57537-ef3e-433f-be78-57330917350a", 00:09:08.268 "is_configured": true, 00:09:08.268 "data_offset": 0, 00:09:08.268 "data_size": 65536 00:09:08.268 } 00:09:08.268 ] 00:09:08.268 } 00:09:08.268 } 00:09:08.268 }' 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:08.268 BaseBdev2 00:09:08.268 BaseBdev3 00:09:08.268 BaseBdev4' 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.268 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.528 [2024-10-30 09:43:46.928140] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:08.528 [2024-10-30 09:43:46.928168] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:08.528 [2024-10-30 09:43:46.928217] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.528 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.528 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.528 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.528 "name": "Existed_Raid", 00:09:08.528 "uuid": "cbc2ff23-2504-4bad-a4f3-1095ce476853", 00:09:08.528 
"strip_size_kb": 64, 00:09:08.528 "state": "offline", 00:09:08.528 "raid_level": "concat", 00:09:08.528 "superblock": false, 00:09:08.528 "num_base_bdevs": 4, 00:09:08.528 "num_base_bdevs_discovered": 3, 00:09:08.528 "num_base_bdevs_operational": 3, 00:09:08.528 "base_bdevs_list": [ 00:09:08.528 { 00:09:08.528 "name": null, 00:09:08.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.528 "is_configured": false, 00:09:08.528 "data_offset": 0, 00:09:08.528 "data_size": 65536 00:09:08.528 }, 00:09:08.528 { 00:09:08.528 "name": "BaseBdev2", 00:09:08.528 "uuid": "3861ddb5-3deb-4b2d-88a7-eac8c8b3d3ac", 00:09:08.528 "is_configured": true, 00:09:08.528 "data_offset": 0, 00:09:08.528 "data_size": 65536 00:09:08.528 }, 00:09:08.528 { 00:09:08.528 "name": "BaseBdev3", 00:09:08.528 "uuid": "f17ce16a-cd6a-446e-aa4c-0d385633fa41", 00:09:08.528 "is_configured": true, 00:09:08.528 "data_offset": 0, 00:09:08.528 "data_size": 65536 00:09:08.528 }, 00:09:08.528 { 00:09:08.528 "name": "BaseBdev4", 00:09:08.528 "uuid": "f5a57537-ef3e-433f-be78-57330917350a", 00:09:08.528 "is_configured": true, 00:09:08.528 "data_offset": 0, 00:09:08.528 "data_size": 65536 00:09:08.528 } 00:09:08.528 ] 00:09:08.528 }' 00:09:08.528 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.528 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.786 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:08.786 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.786 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:08.786 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.786 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.786 09:43:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.786 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.786 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:08.786 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:08.786 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:08.786 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.786 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.786 [2024-10-30 09:43:47.337944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:08.786 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.786 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:08.786 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.786 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.786 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:08.786 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.786 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:09.045 09:43:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.045 [2024-10-30 09:43:47.435321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.045 [2024-10-30 09:43:47.532886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:09.045 [2024-10-30 
09:43:47.533019] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.045 BaseBdev2 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:09.045 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:09.046 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:09.046 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.046 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.304 [ 00:09:09.304 { 00:09:09.304 "name": "BaseBdev2", 00:09:09.304 "aliases": [ 00:09:09.304 "cd48ed7e-6e2a-4cdf-a2c6-6c0546330ac4" 00:09:09.304 ], 00:09:09.304 "product_name": "Malloc disk", 00:09:09.304 "block_size": 512, 00:09:09.304 "num_blocks": 65536, 00:09:09.304 "uuid": "cd48ed7e-6e2a-4cdf-a2c6-6c0546330ac4", 00:09:09.304 "assigned_rate_limits": { 00:09:09.304 "rw_ios_per_sec": 0, 00:09:09.304 "rw_mbytes_per_sec": 0, 00:09:09.304 "r_mbytes_per_sec": 0, 00:09:09.304 "w_mbytes_per_sec": 0 00:09:09.304 }, 
00:09:09.304 "claimed": false, 00:09:09.304 "zoned": false, 00:09:09.304 "supported_io_types": { 00:09:09.304 "read": true, 00:09:09.304 "write": true, 00:09:09.304 "unmap": true, 00:09:09.304 "flush": true, 00:09:09.304 "reset": true, 00:09:09.304 "nvme_admin": false, 00:09:09.304 "nvme_io": false, 00:09:09.304 "nvme_io_md": false, 00:09:09.304 "write_zeroes": true, 00:09:09.304 "zcopy": true, 00:09:09.304 "get_zone_info": false, 00:09:09.304 "zone_management": false, 00:09:09.304 "zone_append": false, 00:09:09.304 "compare": false, 00:09:09.304 "compare_and_write": false, 00:09:09.304 "abort": true, 00:09:09.304 "seek_hole": false, 00:09:09.304 "seek_data": false, 00:09:09.304 "copy": true, 00:09:09.304 "nvme_iov_md": false 00:09:09.304 }, 00:09:09.304 "memory_domains": [ 00:09:09.304 { 00:09:09.304 "dma_device_id": "system", 00:09:09.304 "dma_device_type": 1 00:09:09.304 }, 00:09:09.304 { 00:09:09.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.304 "dma_device_type": 2 00:09:09.304 } 00:09:09.304 ], 00:09:09.304 "driver_specific": {} 00:09:09.304 } 00:09:09.304 ] 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.304 BaseBdev3 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.304 
09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.304 [ 00:09:09.304 { 00:09:09.304 "name": "BaseBdev3", 00:09:09.304 "aliases": [ 00:09:09.304 "68e132a6-9426-4966-9a34-82374ff71aef" 00:09:09.304 ], 00:09:09.304 "product_name": "Malloc disk", 00:09:09.304 "block_size": 512, 00:09:09.304 "num_blocks": 65536, 00:09:09.304 "uuid": "68e132a6-9426-4966-9a34-82374ff71aef", 00:09:09.304 "assigned_rate_limits": { 00:09:09.304 "rw_ios_per_sec": 0, 00:09:09.304 "rw_mbytes_per_sec": 0, 00:09:09.304 "r_mbytes_per_sec": 0, 00:09:09.304 "w_mbytes_per_sec": 0 00:09:09.304 }, 00:09:09.304 "claimed": 
false, 00:09:09.304 "zoned": false, 00:09:09.304 "supported_io_types": { 00:09:09.304 "read": true, 00:09:09.304 "write": true, 00:09:09.304 "unmap": true, 00:09:09.304 "flush": true, 00:09:09.304 "reset": true, 00:09:09.304 "nvme_admin": false, 00:09:09.304 "nvme_io": false, 00:09:09.304 "nvme_io_md": false, 00:09:09.304 "write_zeroes": true, 00:09:09.304 "zcopy": true, 00:09:09.304 "get_zone_info": false, 00:09:09.304 "zone_management": false, 00:09:09.304 "zone_append": false, 00:09:09.304 "compare": false, 00:09:09.304 "compare_and_write": false, 00:09:09.304 "abort": true, 00:09:09.304 "seek_hole": false, 00:09:09.304 "seek_data": false, 00:09:09.304 "copy": true, 00:09:09.304 "nvme_iov_md": false 00:09:09.304 }, 00:09:09.304 "memory_domains": [ 00:09:09.304 { 00:09:09.304 "dma_device_id": "system", 00:09:09.304 "dma_device_type": 1 00:09:09.304 }, 00:09:09.304 { 00:09:09.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.304 "dma_device_type": 2 00:09:09.304 } 00:09:09.304 ], 00:09:09.304 "driver_specific": {} 00:09:09.304 } 00:09:09.304 ] 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.304 BaseBdev4 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.304 09:43:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.304 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.305 [ 00:09:09.305 { 00:09:09.305 "name": "BaseBdev4", 00:09:09.305 "aliases": [ 00:09:09.305 "9536ae25-5fc6-4c68-892a-467b44b129bb" 00:09:09.305 ], 00:09:09.305 "product_name": "Malloc disk", 00:09:09.305 "block_size": 512, 00:09:09.305 "num_blocks": 65536, 00:09:09.305 "uuid": "9536ae25-5fc6-4c68-892a-467b44b129bb", 00:09:09.305 "assigned_rate_limits": { 00:09:09.305 "rw_ios_per_sec": 0, 00:09:09.305 "rw_mbytes_per_sec": 0, 00:09:09.305 "r_mbytes_per_sec": 0, 00:09:09.305 "w_mbytes_per_sec": 0 00:09:09.305 }, 00:09:09.305 "claimed": false, 
00:09:09.305 "zoned": false, 00:09:09.305 "supported_io_types": { 00:09:09.305 "read": true, 00:09:09.305 "write": true, 00:09:09.305 "unmap": true, 00:09:09.305 "flush": true, 00:09:09.305 "reset": true, 00:09:09.305 "nvme_admin": false, 00:09:09.305 "nvme_io": false, 00:09:09.305 "nvme_io_md": false, 00:09:09.305 "write_zeroes": true, 00:09:09.305 "zcopy": true, 00:09:09.305 "get_zone_info": false, 00:09:09.305 "zone_management": false, 00:09:09.305 "zone_append": false, 00:09:09.305 "compare": false, 00:09:09.305 "compare_and_write": false, 00:09:09.305 "abort": true, 00:09:09.305 "seek_hole": false, 00:09:09.305 "seek_data": false, 00:09:09.305 "copy": true, 00:09:09.305 "nvme_iov_md": false 00:09:09.305 }, 00:09:09.305 "memory_domains": [ 00:09:09.305 { 00:09:09.305 "dma_device_id": "system", 00:09:09.305 "dma_device_type": 1 00:09:09.305 }, 00:09:09.305 { 00:09:09.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.305 "dma_device_type": 2 00:09:09.305 } 00:09:09.305 ], 00:09:09.305 "driver_specific": {} 00:09:09.305 } 00:09:09.305 ] 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.305 [2024-10-30 09:43:47.790505] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev1 00:09:09.305 [2024-10-30 09:43:47.790649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:09.305 [2024-10-30 09:43:47.790723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:09.305 [2024-10-30 09:43:47.792575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:09.305 [2024-10-30 09:43:47.792703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.305 "name": "Existed_Raid", 00:09:09.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.305 "strip_size_kb": 64, 00:09:09.305 "state": "configuring", 00:09:09.305 "raid_level": "concat", 00:09:09.305 "superblock": false, 00:09:09.305 "num_base_bdevs": 4, 00:09:09.305 "num_base_bdevs_discovered": 3, 00:09:09.305 "num_base_bdevs_operational": 4, 00:09:09.305 "base_bdevs_list": [ 00:09:09.305 { 00:09:09.305 "name": "BaseBdev1", 00:09:09.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.305 "is_configured": false, 00:09:09.305 "data_offset": 0, 00:09:09.305 "data_size": 0 00:09:09.305 }, 00:09:09.305 { 00:09:09.305 "name": "BaseBdev2", 00:09:09.305 "uuid": "cd48ed7e-6e2a-4cdf-a2c6-6c0546330ac4", 00:09:09.305 "is_configured": true, 00:09:09.305 "data_offset": 0, 00:09:09.305 "data_size": 65536 00:09:09.305 }, 00:09:09.305 { 00:09:09.305 "name": "BaseBdev3", 00:09:09.305 "uuid": "68e132a6-9426-4966-9a34-82374ff71aef", 00:09:09.305 "is_configured": true, 00:09:09.305 "data_offset": 0, 00:09:09.305 "data_size": 65536 00:09:09.305 }, 00:09:09.305 { 00:09:09.305 "name": "BaseBdev4", 00:09:09.305 "uuid": "9536ae25-5fc6-4c68-892a-467b44b129bb", 00:09:09.305 "is_configured": true, 00:09:09.305 "data_offset": 0, 00:09:09.305 "data_size": 65536 00:09:09.305 } 00:09:09.305 ] 00:09:09.305 }' 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.305 09:43:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:09.563 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:09.563 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.563 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.563 [2024-10-30 09:43:48.118601] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:09.563 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.563 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:09.563 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.563 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.563 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.563 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.563 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:09.563 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.563 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.563 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.563 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.563 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.563 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:09.563 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.563 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.563 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.563 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.563 "name": "Existed_Raid", 00:09:09.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.563 "strip_size_kb": 64, 00:09:09.563 "state": "configuring", 00:09:09.563 "raid_level": "concat", 00:09:09.563 "superblock": false, 00:09:09.563 "num_base_bdevs": 4, 00:09:09.563 "num_base_bdevs_discovered": 2, 00:09:09.563 "num_base_bdevs_operational": 4, 00:09:09.563 "base_bdevs_list": [ 00:09:09.563 { 00:09:09.563 "name": "BaseBdev1", 00:09:09.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.563 "is_configured": false, 00:09:09.563 "data_offset": 0, 00:09:09.563 "data_size": 0 00:09:09.563 }, 00:09:09.563 { 00:09:09.563 "name": null, 00:09:09.563 "uuid": "cd48ed7e-6e2a-4cdf-a2c6-6c0546330ac4", 00:09:09.563 "is_configured": false, 00:09:09.563 "data_offset": 0, 00:09:09.563 "data_size": 65536 00:09:09.563 }, 00:09:09.563 { 00:09:09.563 "name": "BaseBdev3", 00:09:09.563 "uuid": "68e132a6-9426-4966-9a34-82374ff71aef", 00:09:09.563 "is_configured": true, 00:09:09.563 "data_offset": 0, 00:09:09.563 "data_size": 65536 00:09:09.563 }, 00:09:09.563 { 00:09:09.563 "name": "BaseBdev4", 00:09:09.563 "uuid": "9536ae25-5fc6-4c68-892a-467b44b129bb", 00:09:09.563 "is_configured": true, 00:09:09.563 "data_offset": 0, 00:09:09.563 "data_size": 65536 00:09:09.563 } 00:09:09.563 ] 00:09:09.563 }' 00:09:09.563 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.563 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.821 09:43:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.821 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.821 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.821 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.079 [2024-10-30 09:43:48.488551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.079 BaseBdev1 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.079 [ 00:09:10.079 { 00:09:10.079 "name": "BaseBdev1", 00:09:10.079 "aliases": [ 00:09:10.079 "0c635864-31bd-4e5d-80aa-3199a5be9ebd" 00:09:10.079 ], 00:09:10.079 "product_name": "Malloc disk", 00:09:10.079 "block_size": 512, 00:09:10.079 "num_blocks": 65536, 00:09:10.079 "uuid": "0c635864-31bd-4e5d-80aa-3199a5be9ebd", 00:09:10.079 "assigned_rate_limits": { 00:09:10.079 "rw_ios_per_sec": 0, 00:09:10.079 "rw_mbytes_per_sec": 0, 00:09:10.079 "r_mbytes_per_sec": 0, 00:09:10.079 "w_mbytes_per_sec": 0 00:09:10.079 }, 00:09:10.079 "claimed": true, 00:09:10.079 "claim_type": "exclusive_write", 00:09:10.079 "zoned": false, 00:09:10.079 "supported_io_types": { 00:09:10.079 "read": true, 00:09:10.079 "write": true, 00:09:10.079 "unmap": true, 00:09:10.079 "flush": true, 00:09:10.079 "reset": true, 00:09:10.079 "nvme_admin": false, 00:09:10.079 "nvme_io": false, 00:09:10.079 "nvme_io_md": false, 00:09:10.079 "write_zeroes": true, 00:09:10.079 "zcopy": true, 00:09:10.079 "get_zone_info": false, 00:09:10.079 "zone_management": false, 00:09:10.079 "zone_append": false, 00:09:10.079 "compare": false, 00:09:10.079 "compare_and_write": false, 00:09:10.079 "abort": true, 00:09:10.079 "seek_hole": false, 00:09:10.079 "seek_data": false, 00:09:10.079 
"copy": true, 00:09:10.079 "nvme_iov_md": false 00:09:10.079 }, 00:09:10.079 "memory_domains": [ 00:09:10.079 { 00:09:10.079 "dma_device_id": "system", 00:09:10.079 "dma_device_type": 1 00:09:10.079 }, 00:09:10.079 { 00:09:10.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.079 "dma_device_type": 2 00:09:10.079 } 00:09:10.079 ], 00:09:10.079 "driver_specific": {} 00:09:10.079 } 00:09:10.079 ] 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.079 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.080 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.080 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.080 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.080 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:10.080 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.080 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.080 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.080 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.080 "name": "Existed_Raid", 00:09:10.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.080 "strip_size_kb": 64, 00:09:10.080 "state": "configuring", 00:09:10.080 "raid_level": "concat", 00:09:10.080 "superblock": false, 00:09:10.080 "num_base_bdevs": 4, 00:09:10.080 "num_base_bdevs_discovered": 3, 00:09:10.080 "num_base_bdevs_operational": 4, 00:09:10.080 "base_bdevs_list": [ 00:09:10.080 { 00:09:10.080 "name": "BaseBdev1", 00:09:10.080 "uuid": "0c635864-31bd-4e5d-80aa-3199a5be9ebd", 00:09:10.080 "is_configured": true, 00:09:10.080 "data_offset": 0, 00:09:10.080 "data_size": 65536 00:09:10.080 }, 00:09:10.080 { 00:09:10.080 "name": null, 00:09:10.080 "uuid": "cd48ed7e-6e2a-4cdf-a2c6-6c0546330ac4", 00:09:10.080 "is_configured": false, 00:09:10.080 "data_offset": 0, 00:09:10.080 "data_size": 65536 00:09:10.080 }, 00:09:10.080 { 00:09:10.080 "name": "BaseBdev3", 00:09:10.080 "uuid": "68e132a6-9426-4966-9a34-82374ff71aef", 00:09:10.080 "is_configured": true, 00:09:10.080 "data_offset": 0, 00:09:10.080 "data_size": 65536 00:09:10.080 }, 00:09:10.080 { 00:09:10.080 "name": "BaseBdev4", 00:09:10.080 "uuid": "9536ae25-5fc6-4c68-892a-467b44b129bb", 00:09:10.080 "is_configured": true, 00:09:10.080 "data_offset": 0, 00:09:10.080 "data_size": 65536 00:09:10.080 } 00:09:10.080 ] 00:09:10.080 }' 00:09:10.080 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.080 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:10.338 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:10.338 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.338 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.338 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.338 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.338 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:10.338 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:10.338 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.338 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.338 [2024-10-30 09:43:48.872700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:10.338 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.338 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:10.338 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.338 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.338 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.338 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.338 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:10.339 09:43:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.339 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.339 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.339 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.339 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.339 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.339 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.339 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.339 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.339 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.339 "name": "Existed_Raid", 00:09:10.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.339 "strip_size_kb": 64, 00:09:10.339 "state": "configuring", 00:09:10.339 "raid_level": "concat", 00:09:10.339 "superblock": false, 00:09:10.339 "num_base_bdevs": 4, 00:09:10.339 "num_base_bdevs_discovered": 2, 00:09:10.339 "num_base_bdevs_operational": 4, 00:09:10.339 "base_bdevs_list": [ 00:09:10.339 { 00:09:10.339 "name": "BaseBdev1", 00:09:10.339 "uuid": "0c635864-31bd-4e5d-80aa-3199a5be9ebd", 00:09:10.339 "is_configured": true, 00:09:10.339 "data_offset": 0, 00:09:10.339 "data_size": 65536 00:09:10.339 }, 00:09:10.339 { 00:09:10.339 "name": null, 00:09:10.339 "uuid": "cd48ed7e-6e2a-4cdf-a2c6-6c0546330ac4", 00:09:10.339 "is_configured": false, 00:09:10.339 "data_offset": 0, 00:09:10.339 "data_size": 65536 00:09:10.339 }, 00:09:10.339 { 00:09:10.339 "name": null, 00:09:10.339 "uuid": 
"68e132a6-9426-4966-9a34-82374ff71aef", 00:09:10.339 "is_configured": false, 00:09:10.339 "data_offset": 0, 00:09:10.339 "data_size": 65536 00:09:10.339 }, 00:09:10.339 { 00:09:10.339 "name": "BaseBdev4", 00:09:10.339 "uuid": "9536ae25-5fc6-4c68-892a-467b44b129bb", 00:09:10.339 "is_configured": true, 00:09:10.339 "data_offset": 0, 00:09:10.339 "data_size": 65536 00:09:10.339 } 00:09:10.339 ] 00:09:10.339 }' 00:09:10.339 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.339 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.602 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:10.602 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.602 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.602 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.602 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.602 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:10.602 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:10.602 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.602 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.602 [2024-10-30 09:43:49.216776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.862 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.862 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state 
Existed_Raid configuring concat 64 4 00:09:10.862 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.862 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.862 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.862 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.862 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:10.862 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.862 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.862 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.862 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.862 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.862 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.862 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.862 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.862 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.862 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.862 "name": "Existed_Raid", 00:09:10.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.862 "strip_size_kb": 64, 00:09:10.863 "state": "configuring", 00:09:10.863 "raid_level": "concat", 00:09:10.863 "superblock": false, 00:09:10.863 "num_base_bdevs": 4, 
00:09:10.863 "num_base_bdevs_discovered": 3, 00:09:10.863 "num_base_bdevs_operational": 4, 00:09:10.863 "base_bdevs_list": [ 00:09:10.863 { 00:09:10.863 "name": "BaseBdev1", 00:09:10.863 "uuid": "0c635864-31bd-4e5d-80aa-3199a5be9ebd", 00:09:10.863 "is_configured": true, 00:09:10.863 "data_offset": 0, 00:09:10.863 "data_size": 65536 00:09:10.863 }, 00:09:10.863 { 00:09:10.863 "name": null, 00:09:10.863 "uuid": "cd48ed7e-6e2a-4cdf-a2c6-6c0546330ac4", 00:09:10.863 "is_configured": false, 00:09:10.863 "data_offset": 0, 00:09:10.863 "data_size": 65536 00:09:10.863 }, 00:09:10.863 { 00:09:10.863 "name": "BaseBdev3", 00:09:10.863 "uuid": "68e132a6-9426-4966-9a34-82374ff71aef", 00:09:10.863 "is_configured": true, 00:09:10.863 "data_offset": 0, 00:09:10.863 "data_size": 65536 00:09:10.863 }, 00:09:10.863 { 00:09:10.863 "name": "BaseBdev4", 00:09:10.863 "uuid": "9536ae25-5fc6-4c68-892a-467b44b129bb", 00:09:10.863 "is_configured": true, 00:09:10.863 "data_offset": 0, 00:09:10.863 "data_size": 65536 00:09:10.863 } 00:09:10.863 ] 00:09:10.863 }' 00:09:10.863 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.863 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.121 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.121 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.121 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.121 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:11.121 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.121 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:11.121 09:43:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:11.121 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.121 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.121 [2024-10-30 09:43:49.568895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.122 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.122 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:11.122 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.122 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.122 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.122 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.122 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:11.122 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.122 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.122 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.122 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.122 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.122 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.122 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:11.122 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.122 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.122 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.122 "name": "Existed_Raid", 00:09:11.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.122 "strip_size_kb": 64, 00:09:11.122 "state": "configuring", 00:09:11.122 "raid_level": "concat", 00:09:11.122 "superblock": false, 00:09:11.122 "num_base_bdevs": 4, 00:09:11.122 "num_base_bdevs_discovered": 2, 00:09:11.122 "num_base_bdevs_operational": 4, 00:09:11.122 "base_bdevs_list": [ 00:09:11.122 { 00:09:11.122 "name": null, 00:09:11.122 "uuid": "0c635864-31bd-4e5d-80aa-3199a5be9ebd", 00:09:11.122 "is_configured": false, 00:09:11.122 "data_offset": 0, 00:09:11.122 "data_size": 65536 00:09:11.122 }, 00:09:11.122 { 00:09:11.122 "name": null, 00:09:11.122 "uuid": "cd48ed7e-6e2a-4cdf-a2c6-6c0546330ac4", 00:09:11.122 "is_configured": false, 00:09:11.122 "data_offset": 0, 00:09:11.122 "data_size": 65536 00:09:11.122 }, 00:09:11.122 { 00:09:11.122 "name": "BaseBdev3", 00:09:11.122 "uuid": "68e132a6-9426-4966-9a34-82374ff71aef", 00:09:11.122 "is_configured": true, 00:09:11.122 "data_offset": 0, 00:09:11.122 "data_size": 65536 00:09:11.122 }, 00:09:11.122 { 00:09:11.122 "name": "BaseBdev4", 00:09:11.122 "uuid": "9536ae25-5fc6-4c68-892a-467b44b129bb", 00:09:11.122 "is_configured": true, 00:09:11.122 "data_offset": 0, 00:09:11.122 "data_size": 65536 00:09:11.122 } 00:09:11.122 ] 00:09:11.122 }' 00:09:11.122 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.122 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.381 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.381 09:43:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.381 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.381 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:11.381 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.381 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:11.381 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:11.381 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.381 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.381 [2024-10-30 09:43:49.978172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:11.381 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.381 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:11.381 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.381 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.381 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.381 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.381 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:11.381 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.381 09:43:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.381 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.381 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.381 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.381 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.381 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.381 09:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.381 09:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.639 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.639 "name": "Existed_Raid", 00:09:11.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.639 "strip_size_kb": 64, 00:09:11.639 "state": "configuring", 00:09:11.639 "raid_level": "concat", 00:09:11.639 "superblock": false, 00:09:11.639 "num_base_bdevs": 4, 00:09:11.639 "num_base_bdevs_discovered": 3, 00:09:11.639 "num_base_bdevs_operational": 4, 00:09:11.639 "base_bdevs_list": [ 00:09:11.639 { 00:09:11.639 "name": null, 00:09:11.639 "uuid": "0c635864-31bd-4e5d-80aa-3199a5be9ebd", 00:09:11.639 "is_configured": false, 00:09:11.639 "data_offset": 0, 00:09:11.639 "data_size": 65536 00:09:11.639 }, 00:09:11.639 { 00:09:11.639 "name": "BaseBdev2", 00:09:11.639 "uuid": "cd48ed7e-6e2a-4cdf-a2c6-6c0546330ac4", 00:09:11.639 "is_configured": true, 00:09:11.639 "data_offset": 0, 00:09:11.639 "data_size": 65536 00:09:11.639 }, 00:09:11.639 { 00:09:11.639 "name": "BaseBdev3", 00:09:11.639 "uuid": "68e132a6-9426-4966-9a34-82374ff71aef", 00:09:11.639 "is_configured": true, 00:09:11.639 "data_offset": 0, 
00:09:11.639 "data_size": 65536 00:09:11.639 }, 00:09:11.639 { 00:09:11.639 "name": "BaseBdev4", 00:09:11.639 "uuid": "9536ae25-5fc6-4c68-892a-467b44b129bb", 00:09:11.639 "is_configured": true, 00:09:11.639 "data_offset": 0, 00:09:11.639 "data_size": 65536 00:09:11.639 } 00:09:11.639 ] 00:09:11.639 }' 00:09:11.639 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.639 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0c635864-31bd-4e5d-80aa-3199a5be9ebd 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.899 [2024-10-30 09:43:50.392105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:11.899 [2024-10-30 09:43:50.392143] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:11.899 [2024-10-30 09:43:50.392150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:11.899 [2024-10-30 09:43:50.392394] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:11.899 [2024-10-30 09:43:50.392512] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:11.899 [2024-10-30 09:43:50.392522] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:11.899 [2024-10-30 09:43:50.392727] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.899 NewBaseBdev 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:11.899 
09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.899 [ 00:09:11.899 { 00:09:11.899 "name": "NewBaseBdev", 00:09:11.899 "aliases": [ 00:09:11.899 "0c635864-31bd-4e5d-80aa-3199a5be9ebd" 00:09:11.899 ], 00:09:11.899 "product_name": "Malloc disk", 00:09:11.899 "block_size": 512, 00:09:11.899 "num_blocks": 65536, 00:09:11.899 "uuid": "0c635864-31bd-4e5d-80aa-3199a5be9ebd", 00:09:11.899 "assigned_rate_limits": { 00:09:11.899 "rw_ios_per_sec": 0, 00:09:11.899 "rw_mbytes_per_sec": 0, 00:09:11.899 "r_mbytes_per_sec": 0, 00:09:11.899 "w_mbytes_per_sec": 0 00:09:11.899 }, 00:09:11.899 "claimed": true, 00:09:11.899 "claim_type": "exclusive_write", 00:09:11.899 "zoned": false, 00:09:11.899 "supported_io_types": { 00:09:11.899 "read": true, 00:09:11.899 "write": true, 00:09:11.899 "unmap": true, 00:09:11.899 "flush": true, 00:09:11.899 "reset": true, 00:09:11.899 "nvme_admin": false, 00:09:11.899 "nvme_io": false, 00:09:11.899 "nvme_io_md": false, 00:09:11.899 "write_zeroes": true, 00:09:11.899 "zcopy": true, 00:09:11.899 "get_zone_info": false, 00:09:11.899 "zone_management": false, 00:09:11.899 "zone_append": false, 00:09:11.899 "compare": false, 00:09:11.899 "compare_and_write": false, 00:09:11.899 "abort": true, 00:09:11.899 "seek_hole": false, 00:09:11.899 "seek_data": false, 00:09:11.899 "copy": true, 00:09:11.899 "nvme_iov_md": false 00:09:11.899 }, 00:09:11.899 
"memory_domains": [ 00:09:11.899 { 00:09:11.899 "dma_device_id": "system", 00:09:11.899 "dma_device_type": 1 00:09:11.899 }, 00:09:11.899 { 00:09:11.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.899 "dma_device_type": 2 00:09:11.899 } 00:09:11.899 ], 00:09:11.899 "driver_specific": {} 00:09:11.899 } 00:09:11.899 ] 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.899 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.900 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.900 09:43:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.900 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.900 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.900 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.900 "name": "Existed_Raid", 00:09:11.900 "uuid": "36a236ec-bb8e-42a6-a0b3-e99ad666caee", 00:09:11.900 "strip_size_kb": 64, 00:09:11.900 "state": "online", 00:09:11.900 "raid_level": "concat", 00:09:11.900 "superblock": false, 00:09:11.900 "num_base_bdevs": 4, 00:09:11.900 "num_base_bdevs_discovered": 4, 00:09:11.900 "num_base_bdevs_operational": 4, 00:09:11.900 "base_bdevs_list": [ 00:09:11.900 { 00:09:11.900 "name": "NewBaseBdev", 00:09:11.900 "uuid": "0c635864-31bd-4e5d-80aa-3199a5be9ebd", 00:09:11.900 "is_configured": true, 00:09:11.900 "data_offset": 0, 00:09:11.900 "data_size": 65536 00:09:11.900 }, 00:09:11.900 { 00:09:11.900 "name": "BaseBdev2", 00:09:11.900 "uuid": "cd48ed7e-6e2a-4cdf-a2c6-6c0546330ac4", 00:09:11.900 "is_configured": true, 00:09:11.900 "data_offset": 0, 00:09:11.900 "data_size": 65536 00:09:11.900 }, 00:09:11.900 { 00:09:11.900 "name": "BaseBdev3", 00:09:11.900 "uuid": "68e132a6-9426-4966-9a34-82374ff71aef", 00:09:11.900 "is_configured": true, 00:09:11.900 "data_offset": 0, 00:09:11.900 "data_size": 65536 00:09:11.900 }, 00:09:11.900 { 00:09:11.900 "name": "BaseBdev4", 00:09:11.900 "uuid": "9536ae25-5fc6-4c68-892a-467b44b129bb", 00:09:11.900 "is_configured": true, 00:09:11.900 "data_offset": 0, 00:09:11.900 "data_size": 65536 00:09:11.900 } 00:09:11.900 ] 00:09:11.900 }' 00:09:11.900 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.900 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.158 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:09:12.158 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:12.158 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:12.158 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:12.158 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:12.158 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:12.158 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:12.158 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:12.158 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.158 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.158 [2024-10-30 09:43:50.740602] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.158 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.158 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:12.158 "name": "Existed_Raid", 00:09:12.158 "aliases": [ 00:09:12.158 "36a236ec-bb8e-42a6-a0b3-e99ad666caee" 00:09:12.158 ], 00:09:12.158 "product_name": "Raid Volume", 00:09:12.158 "block_size": 512, 00:09:12.158 "num_blocks": 262144, 00:09:12.158 "uuid": "36a236ec-bb8e-42a6-a0b3-e99ad666caee", 00:09:12.158 "assigned_rate_limits": { 00:09:12.158 "rw_ios_per_sec": 0, 00:09:12.158 "rw_mbytes_per_sec": 0, 00:09:12.158 "r_mbytes_per_sec": 0, 00:09:12.158 "w_mbytes_per_sec": 0 00:09:12.158 }, 00:09:12.158 "claimed": false, 00:09:12.158 "zoned": false, 00:09:12.158 "supported_io_types": { 00:09:12.158 "read": true, 
00:09:12.158 "write": true, 00:09:12.158 "unmap": true, 00:09:12.158 "flush": true, 00:09:12.158 "reset": true, 00:09:12.158 "nvme_admin": false, 00:09:12.158 "nvme_io": false, 00:09:12.158 "nvme_io_md": false, 00:09:12.158 "write_zeroes": true, 00:09:12.158 "zcopy": false, 00:09:12.158 "get_zone_info": false, 00:09:12.158 "zone_management": false, 00:09:12.158 "zone_append": false, 00:09:12.158 "compare": false, 00:09:12.158 "compare_and_write": false, 00:09:12.158 "abort": false, 00:09:12.158 "seek_hole": false, 00:09:12.158 "seek_data": false, 00:09:12.158 "copy": false, 00:09:12.158 "nvme_iov_md": false 00:09:12.158 }, 00:09:12.158 "memory_domains": [ 00:09:12.158 { 00:09:12.158 "dma_device_id": "system", 00:09:12.158 "dma_device_type": 1 00:09:12.158 }, 00:09:12.158 { 00:09:12.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.158 "dma_device_type": 2 00:09:12.158 }, 00:09:12.158 { 00:09:12.158 "dma_device_id": "system", 00:09:12.158 "dma_device_type": 1 00:09:12.158 }, 00:09:12.158 { 00:09:12.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.158 "dma_device_type": 2 00:09:12.158 }, 00:09:12.158 { 00:09:12.158 "dma_device_id": "system", 00:09:12.158 "dma_device_type": 1 00:09:12.158 }, 00:09:12.158 { 00:09:12.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.158 "dma_device_type": 2 00:09:12.158 }, 00:09:12.158 { 00:09:12.158 "dma_device_id": "system", 00:09:12.158 "dma_device_type": 1 00:09:12.158 }, 00:09:12.158 { 00:09:12.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.158 "dma_device_type": 2 00:09:12.158 } 00:09:12.158 ], 00:09:12.158 "driver_specific": { 00:09:12.158 "raid": { 00:09:12.158 "uuid": "36a236ec-bb8e-42a6-a0b3-e99ad666caee", 00:09:12.158 "strip_size_kb": 64, 00:09:12.158 "state": "online", 00:09:12.158 "raid_level": "concat", 00:09:12.158 "superblock": false, 00:09:12.158 "num_base_bdevs": 4, 00:09:12.158 "num_base_bdevs_discovered": 4, 00:09:12.158 "num_base_bdevs_operational": 4, 00:09:12.158 "base_bdevs_list": [ 
00:09:12.158 { 00:09:12.158 "name": "NewBaseBdev", 00:09:12.158 "uuid": "0c635864-31bd-4e5d-80aa-3199a5be9ebd", 00:09:12.158 "is_configured": true, 00:09:12.158 "data_offset": 0, 00:09:12.158 "data_size": 65536 00:09:12.158 }, 00:09:12.158 { 00:09:12.158 "name": "BaseBdev2", 00:09:12.158 "uuid": "cd48ed7e-6e2a-4cdf-a2c6-6c0546330ac4", 00:09:12.158 "is_configured": true, 00:09:12.158 "data_offset": 0, 00:09:12.158 "data_size": 65536 00:09:12.158 }, 00:09:12.158 { 00:09:12.158 "name": "BaseBdev3", 00:09:12.158 "uuid": "68e132a6-9426-4966-9a34-82374ff71aef", 00:09:12.158 "is_configured": true, 00:09:12.158 "data_offset": 0, 00:09:12.158 "data_size": 65536 00:09:12.159 }, 00:09:12.159 { 00:09:12.159 "name": "BaseBdev4", 00:09:12.159 "uuid": "9536ae25-5fc6-4c68-892a-467b44b129bb", 00:09:12.159 "is_configured": true, 00:09:12.159 "data_offset": 0, 00:09:12.159 "data_size": 65536 00:09:12.159 } 00:09:12.159 ] 00:09:12.159 } 00:09:12.159 } 00:09:12.159 }' 00:09:12.159 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:12.416 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:12.417 BaseBdev2 00:09:12.417 BaseBdev3 00:09:12.417 BaseBdev4' 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.417 [2024-10-30 09:43:50.964275] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:12.417 [2024-10-30 09:43:50.964302] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.417 [2024-10-30 
09:43:50.964365] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.417 [2024-10-30 09:43:50.964429] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.417 [2024-10-30 09:43:50.964439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69564 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 69564 ']' 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 69564 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69564 00:09:12.417 killing process with pid 69564 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69564' 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 69564 00:09:12.417 [2024-10-30 09:43:50.999531] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:12.417 09:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 69564 00:09:12.753 [2024-10-30 09:43:51.244440] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:09:13.688 09:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:13.688 00:09:13.688 real 0m8.333s 00:09:13.688 user 0m13.324s 00:09:13.688 sys 0m1.300s 00:09:13.688 ************************************ 00:09:13.688 END TEST raid_state_function_test 00:09:13.688 ************************************ 00:09:13.688 09:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:13.688 09:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.688 09:43:51 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:09:13.688 09:43:51 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:13.688 09:43:51 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:13.689 09:43:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:13.689 ************************************ 00:09:13.689 START TEST raid_state_function_test_sb 00:09:13.689 ************************************ 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 true 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev1 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 
-- # '[' concat '!=' raid1 ']' 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:13.689 Process raid pid: 70208 00:09:13.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70208 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70208' 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70208 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 70208 ']' 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.689 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:13.689 [2024-10-30 09:43:52.061111] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:09:13.689 [2024-10-30 09:43:52.061229] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.689 [2024-10-30 09:43:52.222365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.947 [2024-10-30 09:43:52.321728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.947 [2024-10-30 09:43:52.458552] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.947 [2024-10-30 09:43:52.458586] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.514 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:14.514 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:09:14.514 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:14.514 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.514 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.514 [2024-10-30 09:43:52.915151] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev1 00:09:14.514 [2024-10-30 09:43:52.915199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.514 [2024-10-30 09:43:52.915209] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.514 [2024-10-30 09:43:52.915219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.514 [2024-10-30 09:43:52.915226] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.514 [2024-10-30 09:43:52.915235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.514 [2024-10-30 09:43:52.915241] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:14.514 [2024-10-30 09:43:52.915250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:14.514 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.514 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:14.514 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.514 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.514 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.514 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.514 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:14.514 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.514 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:09:14.514 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.514 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.514 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.514 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.514 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.514 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.514 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.514 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.514 "name": "Existed_Raid", 00:09:14.514 "uuid": "c35cb2bd-f098-4cce-86bb-6f281b7fbd85", 00:09:14.514 "strip_size_kb": 64, 00:09:14.514 "state": "configuring", 00:09:14.514 "raid_level": "concat", 00:09:14.514 "superblock": true, 00:09:14.514 "num_base_bdevs": 4, 00:09:14.514 "num_base_bdevs_discovered": 0, 00:09:14.514 "num_base_bdevs_operational": 4, 00:09:14.514 "base_bdevs_list": [ 00:09:14.514 { 00:09:14.514 "name": "BaseBdev1", 00:09:14.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.514 "is_configured": false, 00:09:14.514 "data_offset": 0, 00:09:14.514 "data_size": 0 00:09:14.514 }, 00:09:14.514 { 00:09:14.514 "name": "BaseBdev2", 00:09:14.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.514 "is_configured": false, 00:09:14.514 "data_offset": 0, 00:09:14.514 "data_size": 0 00:09:14.514 }, 00:09:14.514 { 00:09:14.514 "name": "BaseBdev3", 00:09:14.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.514 "is_configured": false, 00:09:14.514 "data_offset": 0, 00:09:14.514 "data_size": 0 00:09:14.514 }, 
00:09:14.514 { 00:09:14.514 "name": "BaseBdev4", 00:09:14.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.514 "is_configured": false, 00:09:14.514 "data_offset": 0, 00:09:14.514 "data_size": 0 00:09:14.514 } 00:09:14.514 ] 00:09:14.514 }' 00:09:14.514 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.514 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.772 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.772 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.772 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.772 [2024-10-30 09:43:53.239159] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.772 [2024-10-30 09:43:53.239191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:14.772 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.772 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:14.772 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.772 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.773 [2024-10-30 09:43:53.247178] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.773 [2024-10-30 09:43:53.247213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.773 [2024-10-30 09:43:53.247221] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:09:14.773 [2024-10-30 09:43:53.247230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.773 [2024-10-30 09:43:53.247237] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.773 [2024-10-30 09:43:53.247245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.773 [2024-10-30 09:43:53.247252] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:14.773 [2024-10-30 09:43:53.247260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.773 [2024-10-30 09:43:53.279436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.773 BaseBdev1 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:14.773 09:43:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.773 [ 00:09:14.773 { 00:09:14.773 "name": "BaseBdev1", 00:09:14.773 "aliases": [ 00:09:14.773 "713c80f0-dd33-4475-834e-80c94e8ec178" 00:09:14.773 ], 00:09:14.773 "product_name": "Malloc disk", 00:09:14.773 "block_size": 512, 00:09:14.773 "num_blocks": 65536, 00:09:14.773 "uuid": "713c80f0-dd33-4475-834e-80c94e8ec178", 00:09:14.773 "assigned_rate_limits": { 00:09:14.773 "rw_ios_per_sec": 0, 00:09:14.773 "rw_mbytes_per_sec": 0, 00:09:14.773 "r_mbytes_per_sec": 0, 00:09:14.773 "w_mbytes_per_sec": 0 00:09:14.773 }, 00:09:14.773 "claimed": true, 00:09:14.773 "claim_type": "exclusive_write", 00:09:14.773 "zoned": false, 00:09:14.773 "supported_io_types": { 00:09:14.773 "read": true, 00:09:14.773 "write": true, 00:09:14.773 "unmap": true, 00:09:14.773 "flush": true, 00:09:14.773 "reset": true, 00:09:14.773 "nvme_admin": false, 00:09:14.773 "nvme_io": false, 00:09:14.773 "nvme_io_md": false, 00:09:14.773 "write_zeroes": true, 00:09:14.773 "zcopy": true, 00:09:14.773 "get_zone_info": false, 00:09:14.773 "zone_management": false, 00:09:14.773 "zone_append": false, 
00:09:14.773 "compare": false, 00:09:14.773 "compare_and_write": false, 00:09:14.773 "abort": true, 00:09:14.773 "seek_hole": false, 00:09:14.773 "seek_data": false, 00:09:14.773 "copy": true, 00:09:14.773 "nvme_iov_md": false 00:09:14.773 }, 00:09:14.773 "memory_domains": [ 00:09:14.773 { 00:09:14.773 "dma_device_id": "system", 00:09:14.773 "dma_device_type": 1 00:09:14.773 }, 00:09:14.773 { 00:09:14.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.773 "dma_device_type": 2 00:09:14.773 } 00:09:14.773 ], 00:09:14.773 "driver_specific": {} 00:09:14.773 } 00:09:14.773 ] 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.773 "name": "Existed_Raid", 00:09:14.773 "uuid": "e5a98459-5ce5-4b69-9ff4-4591ae10884b", 00:09:14.773 "strip_size_kb": 64, 00:09:14.773 "state": "configuring", 00:09:14.773 "raid_level": "concat", 00:09:14.773 "superblock": true, 00:09:14.773 "num_base_bdevs": 4, 00:09:14.773 "num_base_bdevs_discovered": 1, 00:09:14.773 "num_base_bdevs_operational": 4, 00:09:14.773 "base_bdevs_list": [ 00:09:14.773 { 00:09:14.773 "name": "BaseBdev1", 00:09:14.773 "uuid": "713c80f0-dd33-4475-834e-80c94e8ec178", 00:09:14.773 "is_configured": true, 00:09:14.773 "data_offset": 2048, 00:09:14.773 "data_size": 63488 00:09:14.773 }, 00:09:14.773 { 00:09:14.773 "name": "BaseBdev2", 00:09:14.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.773 "is_configured": false, 00:09:14.773 "data_offset": 0, 00:09:14.773 "data_size": 0 00:09:14.773 }, 00:09:14.773 { 00:09:14.773 "name": "BaseBdev3", 00:09:14.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.773 "is_configured": false, 00:09:14.773 "data_offset": 0, 00:09:14.773 "data_size": 0 00:09:14.773 }, 00:09:14.773 { 00:09:14.773 "name": "BaseBdev4", 00:09:14.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.773 "is_configured": false, 00:09:14.773 "data_offset": 0, 00:09:14.773 "data_size": 0 00:09:14.773 } 00:09:14.773 ] 
00:09:14.773 }' 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.773 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.031 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:15.031 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.031 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.031 [2024-10-30 09:43:53.627560] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:15.031 [2024-10-30 09:43:53.627604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:15.031 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.031 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:15.031 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.031 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.031 [2024-10-30 09:43:53.635621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:15.031 [2024-10-30 09:43:53.637457] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:15.031 [2024-10-30 09:43:53.637604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:15.031 [2024-10-30 09:43:53.637620] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:15.031 [2024-10-30 09:43:53.637632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev3 doesn't exist now 00:09:15.031 [2024-10-30 09:43:53.637640] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:15.031 [2024-10-30 09:43:53.637648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:15.031 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.031 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:15.031 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.031 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:15.031 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.031 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.031 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.031 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.031 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:15.031 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.031 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.031 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.031 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.031 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.031 09:43:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.031 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.031 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.289 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.289 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.289 "name": "Existed_Raid", 00:09:15.289 "uuid": "b362de25-9a14-4bef-9b22-d06374b26bae", 00:09:15.289 "strip_size_kb": 64, 00:09:15.289 "state": "configuring", 00:09:15.289 "raid_level": "concat", 00:09:15.289 "superblock": true, 00:09:15.289 "num_base_bdevs": 4, 00:09:15.289 "num_base_bdevs_discovered": 1, 00:09:15.289 "num_base_bdevs_operational": 4, 00:09:15.289 "base_bdevs_list": [ 00:09:15.289 { 00:09:15.289 "name": "BaseBdev1", 00:09:15.289 "uuid": "713c80f0-dd33-4475-834e-80c94e8ec178", 00:09:15.289 "is_configured": true, 00:09:15.289 "data_offset": 2048, 00:09:15.289 "data_size": 63488 00:09:15.289 }, 00:09:15.289 { 00:09:15.289 "name": "BaseBdev2", 00:09:15.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.289 "is_configured": false, 00:09:15.289 "data_offset": 0, 00:09:15.289 "data_size": 0 00:09:15.289 }, 00:09:15.289 { 00:09:15.289 "name": "BaseBdev3", 00:09:15.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.289 "is_configured": false, 00:09:15.289 "data_offset": 0, 00:09:15.289 "data_size": 0 00:09:15.289 }, 00:09:15.289 { 00:09:15.289 "name": "BaseBdev4", 00:09:15.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.289 "is_configured": false, 00:09:15.289 "data_offset": 0, 00:09:15.289 "data_size": 0 00:09:15.289 } 00:09:15.289 ] 00:09:15.289 }' 00:09:15.289 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.289 09:43:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:15.548 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:15.548 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.548 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.548 [2024-10-30 09:43:53.993931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.548 BaseBdev2 00:09:15.548 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.548 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:15.548 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:15.548 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:15.548 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:15.548 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:15.548 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:15.548 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:15.548 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.548 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.548 [ 00:09:15.548 { 00:09:15.548 "name": "BaseBdev2", 00:09:15.548 "aliases": [ 00:09:15.548 "ce61d9c5-8ad0-46bd-8a47-9411e087f3bd" 00:09:15.548 ], 00:09:15.548 "product_name": "Malloc disk", 00:09:15.548 "block_size": 512, 00:09:15.548 "num_blocks": 65536, 00:09:15.548 "uuid": "ce61d9c5-8ad0-46bd-8a47-9411e087f3bd", 00:09:15.548 "assigned_rate_limits": { 00:09:15.548 "rw_ios_per_sec": 0, 00:09:15.548 "rw_mbytes_per_sec": 0, 00:09:15.548 "r_mbytes_per_sec": 0, 00:09:15.548 "w_mbytes_per_sec": 0 00:09:15.548 }, 00:09:15.548 "claimed": true, 00:09:15.548 "claim_type": "exclusive_write", 00:09:15.548 "zoned": false, 00:09:15.548 "supported_io_types": { 00:09:15.548 "read": true, 00:09:15.548 "write": true, 00:09:15.548 "unmap": true, 00:09:15.548 "flush": true, 00:09:15.548 "reset": true, 00:09:15.548 "nvme_admin": false, 00:09:15.548 "nvme_io": false, 00:09:15.548 "nvme_io_md": false, 00:09:15.548 "write_zeroes": true, 00:09:15.548 "zcopy": true, 00:09:15.548 "get_zone_info": false, 00:09:15.548 "zone_management": false, 00:09:15.548 "zone_append": false, 00:09:15.548 "compare": false, 00:09:15.548 "compare_and_write": false, 00:09:15.548 "abort": true, 00:09:15.548 "seek_hole": false, 00:09:15.548 "seek_data": false, 00:09:15.548 "copy": true, 00:09:15.548 "nvme_iov_md": false 00:09:15.548 }, 00:09:15.548 "memory_domains": [ 00:09:15.548 { 00:09:15.548 "dma_device_id": "system", 00:09:15.548 "dma_device_type": 1 00:09:15.548 }, 00:09:15.548 { 00:09:15.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.548 "dma_device_type": 2 00:09:15.548 } 00:09:15.548 ], 00:09:15.548 "driver_specific": {} 00:09:15.548 } 00:09:15.548 ] 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@909 -- # return 0 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.548 "name": "Existed_Raid", 00:09:15.548 "uuid": "b362de25-9a14-4bef-9b22-d06374b26bae", 00:09:15.548 "strip_size_kb": 64, 00:09:15.548 "state": "configuring", 00:09:15.548 "raid_level": "concat", 00:09:15.548 "superblock": true, 00:09:15.548 "num_base_bdevs": 4, 00:09:15.548 "num_base_bdevs_discovered": 2, 00:09:15.548 "num_base_bdevs_operational": 4, 00:09:15.548 "base_bdevs_list": [ 00:09:15.548 { 00:09:15.548 "name": "BaseBdev1", 00:09:15.548 "uuid": "713c80f0-dd33-4475-834e-80c94e8ec178", 00:09:15.548 "is_configured": true, 00:09:15.548 "data_offset": 2048, 00:09:15.548 "data_size": 63488 00:09:15.548 }, 00:09:15.548 { 00:09:15.548 "name": "BaseBdev2", 00:09:15.548 "uuid": "ce61d9c5-8ad0-46bd-8a47-9411e087f3bd", 00:09:15.548 "is_configured": true, 00:09:15.548 "data_offset": 2048, 00:09:15.548 "data_size": 63488 00:09:15.548 }, 00:09:15.548 { 00:09:15.548 "name": "BaseBdev3", 00:09:15.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.548 "is_configured": false, 00:09:15.548 "data_offset": 0, 00:09:15.548 "data_size": 0 00:09:15.548 }, 00:09:15.548 { 00:09:15.548 "name": "BaseBdev4", 00:09:15.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.548 "is_configured": false, 00:09:15.548 "data_offset": 0, 00:09:15.548 "data_size": 0 00:09:15.548 } 00:09:15.548 ] 00:09:15.548 }' 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.548 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.806 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:15.806 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.806 09:43:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.806 [2024-10-30 09:43:54.361255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:15.806 BaseBdev3 00:09:15.806 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.806 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:15.806 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:15.806 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:15.806 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:15.806 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:15.806 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:15.806 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:15.806 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.806 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.806 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.806 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:15.806 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.806 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.806 [ 00:09:15.806 { 00:09:15.806 "name": "BaseBdev3", 00:09:15.806 "aliases": [ 00:09:15.806 "41704357-2d54-4863-8afd-32b310757400" 00:09:15.806 ], 
00:09:15.806 "product_name": "Malloc disk", 00:09:15.806 "block_size": 512, 00:09:15.806 "num_blocks": 65536, 00:09:15.806 "uuid": "41704357-2d54-4863-8afd-32b310757400", 00:09:15.806 "assigned_rate_limits": { 00:09:15.806 "rw_ios_per_sec": 0, 00:09:15.806 "rw_mbytes_per_sec": 0, 00:09:15.806 "r_mbytes_per_sec": 0, 00:09:15.806 "w_mbytes_per_sec": 0 00:09:15.806 }, 00:09:15.806 "claimed": true, 00:09:15.806 "claim_type": "exclusive_write", 00:09:15.806 "zoned": false, 00:09:15.806 "supported_io_types": { 00:09:15.806 "read": true, 00:09:15.806 "write": true, 00:09:15.806 "unmap": true, 00:09:15.806 "flush": true, 00:09:15.806 "reset": true, 00:09:15.806 "nvme_admin": false, 00:09:15.806 "nvme_io": false, 00:09:15.806 "nvme_io_md": false, 00:09:15.806 "write_zeroes": true, 00:09:15.806 "zcopy": true, 00:09:15.806 "get_zone_info": false, 00:09:15.806 "zone_management": false, 00:09:15.806 "zone_append": false, 00:09:15.806 "compare": false, 00:09:15.806 "compare_and_write": false, 00:09:15.806 "abort": true, 00:09:15.806 "seek_hole": false, 00:09:15.806 "seek_data": false, 00:09:15.806 "copy": true, 00:09:15.806 "nvme_iov_md": false 00:09:15.806 }, 00:09:15.806 "memory_domains": [ 00:09:15.807 { 00:09:15.807 "dma_device_id": "system", 00:09:15.807 "dma_device_type": 1 00:09:15.807 }, 00:09:15.807 { 00:09:15.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.807 "dma_device_type": 2 00:09:15.807 } 00:09:15.807 ], 00:09:15.807 "driver_specific": {} 00:09:15.807 } 00:09:15.807 ] 00:09:15.807 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.807 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:15.807 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:15.807 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.807 09:43:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:15.807 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.807 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.807 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.807 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.807 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:15.807 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.807 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.807 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.807 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.807 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.807 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.807 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.807 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.807 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.807 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.807 "name": "Existed_Raid", 00:09:15.807 "uuid": "b362de25-9a14-4bef-9b22-d06374b26bae", 00:09:15.807 "strip_size_kb": 64, 00:09:15.807 "state": "configuring", 
00:09:15.807 "raid_level": "concat", 00:09:15.807 "superblock": true, 00:09:15.807 "num_base_bdevs": 4, 00:09:15.807 "num_base_bdevs_discovered": 3, 00:09:15.807 "num_base_bdevs_operational": 4, 00:09:15.807 "base_bdevs_list": [ 00:09:15.807 { 00:09:15.807 "name": "BaseBdev1", 00:09:15.807 "uuid": "713c80f0-dd33-4475-834e-80c94e8ec178", 00:09:15.807 "is_configured": true, 00:09:15.807 "data_offset": 2048, 00:09:15.807 "data_size": 63488 00:09:15.807 }, 00:09:15.807 { 00:09:15.807 "name": "BaseBdev2", 00:09:15.807 "uuid": "ce61d9c5-8ad0-46bd-8a47-9411e087f3bd", 00:09:15.807 "is_configured": true, 00:09:15.807 "data_offset": 2048, 00:09:15.807 "data_size": 63488 00:09:15.807 }, 00:09:15.807 { 00:09:15.807 "name": "BaseBdev3", 00:09:15.807 "uuid": "41704357-2d54-4863-8afd-32b310757400", 00:09:15.807 "is_configured": true, 00:09:15.807 "data_offset": 2048, 00:09:15.807 "data_size": 63488 00:09:15.807 }, 00:09:15.807 { 00:09:15.807 "name": "BaseBdev4", 00:09:15.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.807 "is_configured": false, 00:09:15.807 "data_offset": 0, 00:09:15.807 "data_size": 0 00:09:15.807 } 00:09:15.807 ] 00:09:15.807 }' 00:09:15.807 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.807 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.372 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:16.372 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.372 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.372 [2024-10-30 09:43:54.735712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:16.373 BaseBdev4 00:09:16.373 [2024-10-30 09:43:54.736098] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:16.373 [2024-10-30 09:43:54.736117] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:16.373 [2024-10-30 09:43:54.736380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:16.373 [2024-10-30 09:43:54.736515] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:16.373 [2024-10-30 09:43:54.736527] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:16.373 [2024-10-30 09:43:54.736647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.373 [ 00:09:16.373 { 00:09:16.373 "name": "BaseBdev4", 00:09:16.373 "aliases": [ 00:09:16.373 "08be20a5-9e31-4d9e-b8f3-f74943b34ea0" 00:09:16.373 ], 00:09:16.373 "product_name": "Malloc disk", 00:09:16.373 "block_size": 512, 00:09:16.373 "num_blocks": 65536, 00:09:16.373 "uuid": "08be20a5-9e31-4d9e-b8f3-f74943b34ea0", 00:09:16.373 "assigned_rate_limits": { 00:09:16.373 "rw_ios_per_sec": 0, 00:09:16.373 "rw_mbytes_per_sec": 0, 00:09:16.373 "r_mbytes_per_sec": 0, 00:09:16.373 "w_mbytes_per_sec": 0 00:09:16.373 }, 00:09:16.373 "claimed": true, 00:09:16.373 "claim_type": "exclusive_write", 00:09:16.373 "zoned": false, 00:09:16.373 "supported_io_types": { 00:09:16.373 "read": true, 00:09:16.373 "write": true, 00:09:16.373 "unmap": true, 00:09:16.373 "flush": true, 00:09:16.373 "reset": true, 00:09:16.373 "nvme_admin": false, 00:09:16.373 "nvme_io": false, 00:09:16.373 "nvme_io_md": false, 00:09:16.373 "write_zeroes": true, 00:09:16.373 "zcopy": true, 00:09:16.373 "get_zone_info": false, 00:09:16.373 "zone_management": false, 00:09:16.373 "zone_append": false, 00:09:16.373 "compare": false, 00:09:16.373 "compare_and_write": false, 00:09:16.373 "abort": true, 00:09:16.373 "seek_hole": false, 00:09:16.373 "seek_data": false, 00:09:16.373 "copy": true, 00:09:16.373 "nvme_iov_md": false 00:09:16.373 }, 00:09:16.373 "memory_domains": [ 00:09:16.373 { 00:09:16.373 "dma_device_id": "system", 00:09:16.373 "dma_device_type": 1 00:09:16.373 }, 00:09:16.373 { 00:09:16.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.373 "dma_device_type": 2 00:09:16.373 } 00:09:16.373 ], 00:09:16.373 "driver_specific": {} 00:09:16.373 } 00:09:16.373 ] 00:09:16.373 09:43:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.373 09:43:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.373 "name": "Existed_Raid", 00:09:16.373 "uuid": "b362de25-9a14-4bef-9b22-d06374b26bae", 00:09:16.373 "strip_size_kb": 64, 00:09:16.373 "state": "online", 00:09:16.373 "raid_level": "concat", 00:09:16.373 "superblock": true, 00:09:16.373 "num_base_bdevs": 4, 00:09:16.373 "num_base_bdevs_discovered": 4, 00:09:16.373 "num_base_bdevs_operational": 4, 00:09:16.373 "base_bdevs_list": [ 00:09:16.373 { 00:09:16.373 "name": "BaseBdev1", 00:09:16.373 "uuid": "713c80f0-dd33-4475-834e-80c94e8ec178", 00:09:16.373 "is_configured": true, 00:09:16.373 "data_offset": 2048, 00:09:16.373 "data_size": 63488 00:09:16.373 }, 00:09:16.373 { 00:09:16.373 "name": "BaseBdev2", 00:09:16.373 "uuid": "ce61d9c5-8ad0-46bd-8a47-9411e087f3bd", 00:09:16.373 "is_configured": true, 00:09:16.373 "data_offset": 2048, 00:09:16.373 "data_size": 63488 00:09:16.373 }, 00:09:16.373 { 00:09:16.373 "name": "BaseBdev3", 00:09:16.373 "uuid": "41704357-2d54-4863-8afd-32b310757400", 00:09:16.373 "is_configured": true, 00:09:16.373 "data_offset": 2048, 00:09:16.373 "data_size": 63488 00:09:16.373 }, 00:09:16.373 { 00:09:16.373 "name": "BaseBdev4", 00:09:16.373 "uuid": "08be20a5-9e31-4d9e-b8f3-f74943b34ea0", 00:09:16.373 "is_configured": true, 00:09:16.373 "data_offset": 2048, 00:09:16.373 "data_size": 63488 00:09:16.373 } 00:09:16.373 ] 00:09:16.373 }' 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.373 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:16.632 09:43:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.632 [2024-10-30 09:43:55.084215] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.632 "name": "Existed_Raid", 00:09:16.632 "aliases": [ 00:09:16.632 "b362de25-9a14-4bef-9b22-d06374b26bae" 00:09:16.632 ], 00:09:16.632 "product_name": "Raid Volume", 00:09:16.632 "block_size": 512, 00:09:16.632 "num_blocks": 253952, 00:09:16.632 "uuid": "b362de25-9a14-4bef-9b22-d06374b26bae", 00:09:16.632 "assigned_rate_limits": { 00:09:16.632 "rw_ios_per_sec": 0, 00:09:16.632 "rw_mbytes_per_sec": 0, 00:09:16.632 "r_mbytes_per_sec": 0, 00:09:16.632 "w_mbytes_per_sec": 0 00:09:16.632 }, 00:09:16.632 "claimed": false, 00:09:16.632 "zoned": false, 00:09:16.632 "supported_io_types": { 00:09:16.632 "read": true, 00:09:16.632 "write": true, 00:09:16.632 
"unmap": true, 00:09:16.632 "flush": true, 00:09:16.632 "reset": true, 00:09:16.632 "nvme_admin": false, 00:09:16.632 "nvme_io": false, 00:09:16.632 "nvme_io_md": false, 00:09:16.632 "write_zeroes": true, 00:09:16.632 "zcopy": false, 00:09:16.632 "get_zone_info": false, 00:09:16.632 "zone_management": false, 00:09:16.632 "zone_append": false, 00:09:16.632 "compare": false, 00:09:16.632 "compare_and_write": false, 00:09:16.632 "abort": false, 00:09:16.632 "seek_hole": false, 00:09:16.632 "seek_data": false, 00:09:16.632 "copy": false, 00:09:16.632 "nvme_iov_md": false 00:09:16.632 }, 00:09:16.632 "memory_domains": [ 00:09:16.632 { 00:09:16.632 "dma_device_id": "system", 00:09:16.632 "dma_device_type": 1 00:09:16.632 }, 00:09:16.632 { 00:09:16.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.632 "dma_device_type": 2 00:09:16.632 }, 00:09:16.632 { 00:09:16.632 "dma_device_id": "system", 00:09:16.632 "dma_device_type": 1 00:09:16.632 }, 00:09:16.632 { 00:09:16.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.632 "dma_device_type": 2 00:09:16.632 }, 00:09:16.632 { 00:09:16.632 "dma_device_id": "system", 00:09:16.632 "dma_device_type": 1 00:09:16.632 }, 00:09:16.632 { 00:09:16.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.632 "dma_device_type": 2 00:09:16.632 }, 00:09:16.632 { 00:09:16.632 "dma_device_id": "system", 00:09:16.632 "dma_device_type": 1 00:09:16.632 }, 00:09:16.632 { 00:09:16.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.632 "dma_device_type": 2 00:09:16.632 } 00:09:16.632 ], 00:09:16.632 "driver_specific": { 00:09:16.632 "raid": { 00:09:16.632 "uuid": "b362de25-9a14-4bef-9b22-d06374b26bae", 00:09:16.632 "strip_size_kb": 64, 00:09:16.632 "state": "online", 00:09:16.632 "raid_level": "concat", 00:09:16.632 "superblock": true, 00:09:16.632 "num_base_bdevs": 4, 00:09:16.632 "num_base_bdevs_discovered": 4, 00:09:16.632 "num_base_bdevs_operational": 4, 00:09:16.632 "base_bdevs_list": [ 00:09:16.632 { 00:09:16.632 "name": "BaseBdev1", 
00:09:16.632 "uuid": "713c80f0-dd33-4475-834e-80c94e8ec178", 00:09:16.632 "is_configured": true, 00:09:16.632 "data_offset": 2048, 00:09:16.632 "data_size": 63488 00:09:16.632 }, 00:09:16.632 { 00:09:16.632 "name": "BaseBdev2", 00:09:16.632 "uuid": "ce61d9c5-8ad0-46bd-8a47-9411e087f3bd", 00:09:16.632 "is_configured": true, 00:09:16.632 "data_offset": 2048, 00:09:16.632 "data_size": 63488 00:09:16.632 }, 00:09:16.632 { 00:09:16.632 "name": "BaseBdev3", 00:09:16.632 "uuid": "41704357-2d54-4863-8afd-32b310757400", 00:09:16.632 "is_configured": true, 00:09:16.632 "data_offset": 2048, 00:09:16.632 "data_size": 63488 00:09:16.632 }, 00:09:16.632 { 00:09:16.632 "name": "BaseBdev4", 00:09:16.632 "uuid": "08be20a5-9e31-4d9e-b8f3-f74943b34ea0", 00:09:16.632 "is_configured": true, 00:09:16.632 "data_offset": 2048, 00:09:16.632 "data_size": 63488 00:09:16.632 } 00:09:16.632 ] 00:09:16.632 } 00:09:16.632 } 00:09:16.632 }' 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:16.632 BaseBdev2 00:09:16.632 BaseBdev3 00:09:16.632 BaseBdev4' 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.632 09:43:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:09:16.632 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.891 [2024-10-30 09:43:55.303938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:16.891 [2024-10-30 09:43:55.304072] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing 
from online to offline 00:09:16.891 [2024-10-30 09:43:55.304130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.891 09:43:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.891 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.891 "name": "Existed_Raid", 00:09:16.891 "uuid": "b362de25-9a14-4bef-9b22-d06374b26bae", 00:09:16.891 "strip_size_kb": 64, 00:09:16.891 "state": "offline", 00:09:16.891 "raid_level": "concat", 00:09:16.891 "superblock": true, 00:09:16.891 "num_base_bdevs": 4, 00:09:16.891 "num_base_bdevs_discovered": 3, 00:09:16.891 "num_base_bdevs_operational": 3, 00:09:16.891 "base_bdevs_list": [ 00:09:16.891 { 00:09:16.891 "name": null, 00:09:16.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.891 "is_configured": false, 00:09:16.891 "data_offset": 0, 00:09:16.891 "data_size": 63488 00:09:16.891 }, 00:09:16.891 { 00:09:16.891 "name": "BaseBdev2", 00:09:16.891 "uuid": "ce61d9c5-8ad0-46bd-8a47-9411e087f3bd", 00:09:16.891 "is_configured": true, 00:09:16.891 "data_offset": 2048, 00:09:16.891 "data_size": 63488 00:09:16.891 }, 00:09:16.891 { 00:09:16.891 "name": "BaseBdev3", 00:09:16.891 "uuid": "41704357-2d54-4863-8afd-32b310757400", 00:09:16.891 "is_configured": true, 00:09:16.891 "data_offset": 2048, 00:09:16.891 "data_size": 63488 00:09:16.891 }, 00:09:16.891 { 00:09:16.891 "name": "BaseBdev4", 00:09:16.891 "uuid": "08be20a5-9e31-4d9e-b8f3-f74943b34ea0", 00:09:16.891 "is_configured": true, 00:09:16.892 "data_offset": 2048, 00:09:16.892 "data_size": 63488 00:09:16.892 } 00:09:16.892 ] 00:09:16.892 }' 
00:09:16.892 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.892 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.149 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:17.149 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:17.149 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.149 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:17.149 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.149 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.149 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.149 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:17.149 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:17.149 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:17.149 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.149 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.149 [2024-10-30 09:43:55.725334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:17.407 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.407 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:17.407 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < 
num_base_bdevs )) 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.408 [2024-10-30 09:43:55.823654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.408 09:43:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.408 [2024-10-30 09:43:55.913255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:17.408 [2024-10-30 09:43:55.913296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.408 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.408 09:43:56 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:17.408 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:17.408 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:17.408 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:17.408 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:17.408 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:17.408 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.408 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.667 BaseBdev2 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.667 
09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.667 [ 00:09:17.667 { 00:09:17.667 "name": "BaseBdev2", 00:09:17.667 "aliases": [ 00:09:17.667 "5911b3dd-3e73-42b8-ba4c-79e87e9d8eba" 00:09:17.667 ], 00:09:17.667 "product_name": "Malloc disk", 00:09:17.667 "block_size": 512, 00:09:17.667 "num_blocks": 65536, 00:09:17.667 "uuid": "5911b3dd-3e73-42b8-ba4c-79e87e9d8eba", 00:09:17.667 "assigned_rate_limits": { 00:09:17.667 "rw_ios_per_sec": 0, 00:09:17.667 "rw_mbytes_per_sec": 0, 00:09:17.667 "r_mbytes_per_sec": 0, 00:09:17.667 "w_mbytes_per_sec": 0 00:09:17.667 }, 00:09:17.667 "claimed": false, 00:09:17.667 "zoned": false, 00:09:17.667 "supported_io_types": { 00:09:17.667 "read": true, 00:09:17.667 "write": true, 00:09:17.667 "unmap": true, 00:09:17.667 "flush": true, 00:09:17.667 "reset": true, 00:09:17.667 "nvme_admin": false, 00:09:17.667 "nvme_io": false, 00:09:17.667 "nvme_io_md": false, 00:09:17.667 "write_zeroes": true, 00:09:17.667 "zcopy": true, 00:09:17.667 "get_zone_info": false, 00:09:17.667 "zone_management": false, 00:09:17.667 "zone_append": false, 00:09:17.667 "compare": false, 00:09:17.667 "compare_and_write": false, 00:09:17.667 "abort": true, 00:09:17.667 "seek_hole": false, 00:09:17.667 "seek_data": false, 00:09:17.667 "copy": true, 00:09:17.667 "nvme_iov_md": false 00:09:17.667 }, 00:09:17.667 "memory_domains": [ 00:09:17.667 { 00:09:17.667 "dma_device_id": "system", 00:09:17.667 "dma_device_type": 1 00:09:17.667 }, 00:09:17.667 { 00:09:17.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.667 "dma_device_type": 2 00:09:17.667 } 
00:09:17.667 ], 00:09:17.667 "driver_specific": {} 00:09:17.667 } 00:09:17.667 ] 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.667 BaseBdev3 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.667 [ 00:09:17.667 { 00:09:17.667 "name": "BaseBdev3", 00:09:17.667 "aliases": [ 00:09:17.667 "3208450a-2c8b-4c4a-b23a-ca7eab368774" 00:09:17.667 ], 00:09:17.667 "product_name": "Malloc disk", 00:09:17.667 "block_size": 512, 00:09:17.667 "num_blocks": 65536, 00:09:17.667 "uuid": "3208450a-2c8b-4c4a-b23a-ca7eab368774", 00:09:17.667 "assigned_rate_limits": { 00:09:17.667 "rw_ios_per_sec": 0, 00:09:17.667 "rw_mbytes_per_sec": 0, 00:09:17.667 "r_mbytes_per_sec": 0, 00:09:17.667 "w_mbytes_per_sec": 0 00:09:17.667 }, 00:09:17.667 "claimed": false, 00:09:17.667 "zoned": false, 00:09:17.667 "supported_io_types": { 00:09:17.667 "read": true, 00:09:17.667 "write": true, 00:09:17.667 "unmap": true, 00:09:17.667 "flush": true, 00:09:17.667 "reset": true, 00:09:17.667 "nvme_admin": false, 00:09:17.667 "nvme_io": false, 00:09:17.667 "nvme_io_md": false, 00:09:17.667 "write_zeroes": true, 00:09:17.667 "zcopy": true, 00:09:17.667 "get_zone_info": false, 00:09:17.667 "zone_management": false, 00:09:17.667 "zone_append": false, 00:09:17.667 "compare": false, 00:09:17.667 "compare_and_write": false, 00:09:17.667 "abort": true, 00:09:17.667 "seek_hole": false, 00:09:17.667 "seek_data": false, 00:09:17.667 "copy": true, 00:09:17.667 "nvme_iov_md": false 00:09:17.667 }, 00:09:17.667 "memory_domains": [ 00:09:17.667 { 00:09:17.667 "dma_device_id": "system", 00:09:17.667 "dma_device_type": 1 00:09:17.667 }, 00:09:17.667 { 00:09:17.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:17.667 "dma_device_type": 2 00:09:17.667 } 00:09:17.667 ], 00:09:17.667 "driver_specific": {} 00:09:17.667 } 00:09:17.667 ] 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.667 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.668 BaseBdev4 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.668 09:43:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.668 [ 00:09:17.668 { 00:09:17.668 "name": "BaseBdev4", 00:09:17.668 "aliases": [ 00:09:17.668 "4ea3fdc6-8180-419e-8170-ba237b857f6f" 00:09:17.668 ], 00:09:17.668 "product_name": "Malloc disk", 00:09:17.668 "block_size": 512, 00:09:17.668 "num_blocks": 65536, 00:09:17.668 "uuid": "4ea3fdc6-8180-419e-8170-ba237b857f6f", 00:09:17.668 "assigned_rate_limits": { 00:09:17.668 "rw_ios_per_sec": 0, 00:09:17.668 "rw_mbytes_per_sec": 0, 00:09:17.668 "r_mbytes_per_sec": 0, 00:09:17.668 "w_mbytes_per_sec": 0 00:09:17.668 }, 00:09:17.668 "claimed": false, 00:09:17.668 "zoned": false, 00:09:17.668 "supported_io_types": { 00:09:17.668 "read": true, 00:09:17.668 "write": true, 00:09:17.668 "unmap": true, 00:09:17.668 "flush": true, 00:09:17.668 "reset": true, 00:09:17.668 "nvme_admin": false, 00:09:17.668 "nvme_io": false, 00:09:17.668 "nvme_io_md": false, 00:09:17.668 "write_zeroes": true, 00:09:17.668 "zcopy": true, 00:09:17.668 "get_zone_info": false, 00:09:17.668 "zone_management": false, 00:09:17.668 "zone_append": false, 00:09:17.668 "compare": false, 00:09:17.668 "compare_and_write": false, 00:09:17.668 "abort": true, 00:09:17.668 "seek_hole": false, 00:09:17.668 "seek_data": false, 00:09:17.668 "copy": true, 00:09:17.668 "nvme_iov_md": false 00:09:17.668 }, 00:09:17.668 "memory_domains": [ 00:09:17.668 { 00:09:17.668 "dma_device_id": "system", 00:09:17.668 "dma_device_type": 1 00:09:17.668 }, 00:09:17.668 { 00:09:17.668 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.668 "dma_device_type": 2 00:09:17.668 } 00:09:17.668 ], 00:09:17.668 "driver_specific": {} 00:09:17.668 } 00:09:17.668 ] 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.668 [2024-10-30 09:43:56.169490] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:17.668 [2024-10-30 09:43:56.169638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:17.668 [2024-10-30 09:43:56.169720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.668 [2024-10-30 09:43:56.171541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:17.668 [2024-10-30 09:43:56.171678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.668 "name": "Existed_Raid", 00:09:17.668 "uuid": "753f96dc-8b5a-4e23-8558-0639aaeb567a", 00:09:17.668 "strip_size_kb": 64, 00:09:17.668 "state": "configuring", 00:09:17.668 "raid_level": "concat", 00:09:17.668 "superblock": true, 00:09:17.668 "num_base_bdevs": 4, 00:09:17.668 "num_base_bdevs_discovered": 3, 
00:09:17.668 "num_base_bdevs_operational": 4, 00:09:17.668 "base_bdevs_list": [ 00:09:17.668 { 00:09:17.668 "name": "BaseBdev1", 00:09:17.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.668 "is_configured": false, 00:09:17.668 "data_offset": 0, 00:09:17.668 "data_size": 0 00:09:17.668 }, 00:09:17.668 { 00:09:17.668 "name": "BaseBdev2", 00:09:17.668 "uuid": "5911b3dd-3e73-42b8-ba4c-79e87e9d8eba", 00:09:17.668 "is_configured": true, 00:09:17.668 "data_offset": 2048, 00:09:17.668 "data_size": 63488 00:09:17.668 }, 00:09:17.668 { 00:09:17.668 "name": "BaseBdev3", 00:09:17.668 "uuid": "3208450a-2c8b-4c4a-b23a-ca7eab368774", 00:09:17.668 "is_configured": true, 00:09:17.668 "data_offset": 2048, 00:09:17.668 "data_size": 63488 00:09:17.668 }, 00:09:17.668 { 00:09:17.668 "name": "BaseBdev4", 00:09:17.668 "uuid": "4ea3fdc6-8180-419e-8170-ba237b857f6f", 00:09:17.668 "is_configured": true, 00:09:17.668 "data_offset": 2048, 00:09:17.668 "data_size": 63488 00:09:17.668 } 00:09:17.668 ] 00:09:17.668 }' 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.668 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.927 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:17.927 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.927 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.927 [2024-10-30 09:43:56.481557] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:17.927 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.927 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:17.927 09:43:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.927 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.927 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.927 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.927 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:17.927 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.927 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.927 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.927 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.927 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.927 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.927 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.927 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.927 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.927 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.927 "name": "Existed_Raid", 00:09:17.927 "uuid": "753f96dc-8b5a-4e23-8558-0639aaeb567a", 00:09:17.927 "strip_size_kb": 64, 00:09:17.927 "state": "configuring", 00:09:17.927 "raid_level": "concat", 00:09:17.927 "superblock": true, 00:09:17.927 "num_base_bdevs": 4, 00:09:17.927 
"num_base_bdevs_discovered": 2, 00:09:17.927 "num_base_bdevs_operational": 4, 00:09:17.927 "base_bdevs_list": [ 00:09:17.927 { 00:09:17.927 "name": "BaseBdev1", 00:09:17.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.927 "is_configured": false, 00:09:17.927 "data_offset": 0, 00:09:17.927 "data_size": 0 00:09:17.927 }, 00:09:17.927 { 00:09:17.927 "name": null, 00:09:17.927 "uuid": "5911b3dd-3e73-42b8-ba4c-79e87e9d8eba", 00:09:17.927 "is_configured": false, 00:09:17.927 "data_offset": 0, 00:09:17.927 "data_size": 63488 00:09:17.927 }, 00:09:17.927 { 00:09:17.927 "name": "BaseBdev3", 00:09:17.927 "uuid": "3208450a-2c8b-4c4a-b23a-ca7eab368774", 00:09:17.927 "is_configured": true, 00:09:17.927 "data_offset": 2048, 00:09:17.927 "data_size": 63488 00:09:17.927 }, 00:09:17.927 { 00:09:17.927 "name": "BaseBdev4", 00:09:17.927 "uuid": "4ea3fdc6-8180-419e-8170-ba237b857f6f", 00:09:17.927 "is_configured": true, 00:09:17.927 "data_offset": 2048, 00:09:17.927 "data_size": 63488 00:09:17.927 } 00:09:17.927 ] 00:09:17.927 }' 00:09:17.927 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.927 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.184 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.184 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.184 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:18.184 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.184 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:18.442 09:43:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.442 [2024-10-30 09:43:56.835566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.442 BaseBdev1 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.442 [ 00:09:18.442 { 00:09:18.442 "name": "BaseBdev1", 00:09:18.442 "aliases": [ 00:09:18.442 "0006c825-0154-4b55-9bcc-6bba79dbadca" 00:09:18.442 ], 00:09:18.442 "product_name": "Malloc disk", 00:09:18.442 "block_size": 512, 00:09:18.442 "num_blocks": 65536, 00:09:18.442 "uuid": "0006c825-0154-4b55-9bcc-6bba79dbadca", 00:09:18.442 "assigned_rate_limits": { 00:09:18.442 "rw_ios_per_sec": 0, 00:09:18.442 "rw_mbytes_per_sec": 0, 00:09:18.442 "r_mbytes_per_sec": 0, 00:09:18.442 "w_mbytes_per_sec": 0 00:09:18.442 }, 00:09:18.442 "claimed": true, 00:09:18.442 "claim_type": "exclusive_write", 00:09:18.442 "zoned": false, 00:09:18.442 "supported_io_types": { 00:09:18.442 "read": true, 00:09:18.442 "write": true, 00:09:18.442 "unmap": true, 00:09:18.442 "flush": true, 00:09:18.442 "reset": true, 00:09:18.442 "nvme_admin": false, 00:09:18.442 "nvme_io": false, 00:09:18.442 "nvme_io_md": false, 00:09:18.442 "write_zeroes": true, 00:09:18.442 "zcopy": true, 00:09:18.442 "get_zone_info": false, 00:09:18.442 "zone_management": false, 00:09:18.442 "zone_append": false, 00:09:18.442 "compare": false, 00:09:18.442 "compare_and_write": false, 00:09:18.442 "abort": true, 00:09:18.442 "seek_hole": false, 00:09:18.442 "seek_data": false, 00:09:18.442 "copy": true, 00:09:18.442 "nvme_iov_md": false 00:09:18.442 }, 00:09:18.442 "memory_domains": [ 00:09:18.442 { 00:09:18.442 "dma_device_id": "system", 00:09:18.442 "dma_device_type": 1 00:09:18.442 }, 00:09:18.442 { 00:09:18.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.442 "dma_device_type": 2 00:09:18.442 } 00:09:18.442 ], 00:09:18.442 "driver_specific": {} 00:09:18.442 } 00:09:18.442 ] 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:18.442 
09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.442 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.442 "name": "Existed_Raid", 00:09:18.442 "uuid": "753f96dc-8b5a-4e23-8558-0639aaeb567a", 00:09:18.442 "strip_size_kb": 
64, 00:09:18.442 "state": "configuring", 00:09:18.442 "raid_level": "concat", 00:09:18.442 "superblock": true, 00:09:18.442 "num_base_bdevs": 4, 00:09:18.442 "num_base_bdevs_discovered": 3, 00:09:18.442 "num_base_bdevs_operational": 4, 00:09:18.442 "base_bdevs_list": [ 00:09:18.442 { 00:09:18.442 "name": "BaseBdev1", 00:09:18.442 "uuid": "0006c825-0154-4b55-9bcc-6bba79dbadca", 00:09:18.442 "is_configured": true, 00:09:18.442 "data_offset": 2048, 00:09:18.442 "data_size": 63488 00:09:18.442 }, 00:09:18.442 { 00:09:18.442 "name": null, 00:09:18.443 "uuid": "5911b3dd-3e73-42b8-ba4c-79e87e9d8eba", 00:09:18.443 "is_configured": false, 00:09:18.443 "data_offset": 0, 00:09:18.443 "data_size": 63488 00:09:18.443 }, 00:09:18.443 { 00:09:18.443 "name": "BaseBdev3", 00:09:18.443 "uuid": "3208450a-2c8b-4c4a-b23a-ca7eab368774", 00:09:18.443 "is_configured": true, 00:09:18.443 "data_offset": 2048, 00:09:18.443 "data_size": 63488 00:09:18.443 }, 00:09:18.443 { 00:09:18.443 "name": "BaseBdev4", 00:09:18.443 "uuid": "4ea3fdc6-8180-419e-8170-ba237b857f6f", 00:09:18.443 "is_configured": true, 00:09:18.443 "data_offset": 2048, 00:09:18.443 "data_size": 63488 00:09:18.443 } 00:09:18.443 ] 00:09:18.443 }' 00:09:18.443 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.443 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.700 [2024-10-30 09:43:57.227715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.700 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.700 "name": "Existed_Raid", 00:09:18.700 "uuid": "753f96dc-8b5a-4e23-8558-0639aaeb567a", 00:09:18.700 "strip_size_kb": 64, 00:09:18.700 "state": "configuring", 00:09:18.700 "raid_level": "concat", 00:09:18.700 "superblock": true, 00:09:18.700 "num_base_bdevs": 4, 00:09:18.700 "num_base_bdevs_discovered": 2, 00:09:18.700 "num_base_bdevs_operational": 4, 00:09:18.700 "base_bdevs_list": [ 00:09:18.700 { 00:09:18.700 "name": "BaseBdev1", 00:09:18.700 "uuid": "0006c825-0154-4b55-9bcc-6bba79dbadca", 00:09:18.700 "is_configured": true, 00:09:18.700 "data_offset": 2048, 00:09:18.700 "data_size": 63488 00:09:18.700 }, 00:09:18.700 { 00:09:18.700 "name": null, 00:09:18.700 "uuid": "5911b3dd-3e73-42b8-ba4c-79e87e9d8eba", 00:09:18.700 "is_configured": false, 00:09:18.700 "data_offset": 0, 00:09:18.700 "data_size": 63488 00:09:18.700 }, 00:09:18.700 { 00:09:18.700 "name": null, 00:09:18.700 "uuid": "3208450a-2c8b-4c4a-b23a-ca7eab368774", 00:09:18.700 "is_configured": false, 00:09:18.700 "data_offset": 0, 00:09:18.700 "data_size": 63488 00:09:18.700 }, 00:09:18.700 { 00:09:18.700 "name": "BaseBdev4", 00:09:18.700 "uuid": "4ea3fdc6-8180-419e-8170-ba237b857f6f", 00:09:18.701 "is_configured": true, 00:09:18.701 "data_offset": 2048, 00:09:18.701 "data_size": 63488 00:09:18.701 } 00:09:18.701 ] 00:09:18.701 }' 00:09:18.701 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:18.701 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.958 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.958 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.958 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.958 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:18.958 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.958 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:18.958 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:18.958 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.958 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.216 [2024-10-30 09:43:57.579804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:19.216 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.216 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:19.216 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.216 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.216 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.216 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=64 00:09:19.216 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:19.216 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.216 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.216 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.216 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.216 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.216 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.216 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.216 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.216 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.216 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.216 "name": "Existed_Raid", 00:09:19.216 "uuid": "753f96dc-8b5a-4e23-8558-0639aaeb567a", 00:09:19.216 "strip_size_kb": 64, 00:09:19.216 "state": "configuring", 00:09:19.216 "raid_level": "concat", 00:09:19.216 "superblock": true, 00:09:19.216 "num_base_bdevs": 4, 00:09:19.216 "num_base_bdevs_discovered": 3, 00:09:19.216 "num_base_bdevs_operational": 4, 00:09:19.216 "base_bdevs_list": [ 00:09:19.216 { 00:09:19.216 "name": "BaseBdev1", 00:09:19.216 "uuid": "0006c825-0154-4b55-9bcc-6bba79dbadca", 00:09:19.216 "is_configured": true, 00:09:19.216 "data_offset": 2048, 00:09:19.216 "data_size": 63488 00:09:19.216 }, 00:09:19.216 { 00:09:19.216 "name": null, 00:09:19.216 "uuid": 
"5911b3dd-3e73-42b8-ba4c-79e87e9d8eba", 00:09:19.216 "is_configured": false, 00:09:19.216 "data_offset": 0, 00:09:19.216 "data_size": 63488 00:09:19.216 }, 00:09:19.216 { 00:09:19.216 "name": "BaseBdev3", 00:09:19.216 "uuid": "3208450a-2c8b-4c4a-b23a-ca7eab368774", 00:09:19.216 "is_configured": true, 00:09:19.216 "data_offset": 2048, 00:09:19.216 "data_size": 63488 00:09:19.216 }, 00:09:19.216 { 00:09:19.216 "name": "BaseBdev4", 00:09:19.216 "uuid": "4ea3fdc6-8180-419e-8170-ba237b857f6f", 00:09:19.216 "is_configured": true, 00:09:19.216 "data_offset": 2048, 00:09:19.216 "data_size": 63488 00:09:19.216 } 00:09:19.216 ] 00:09:19.216 }' 00:09:19.216 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.216 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.474 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.474 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.474 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:19.474 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.474 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.474 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:19.474 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:19.474 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.474 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.474 [2024-10-30 09:43:57.935883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
00:09:19.474 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.474 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:19.474 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.474 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.474 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.474 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.474 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:19.474 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.474 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.474 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.474 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.474 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.474 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.474 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.474 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.474 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.474 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:09:19.474 "name": "Existed_Raid", 00:09:19.474 "uuid": "753f96dc-8b5a-4e23-8558-0639aaeb567a", 00:09:19.474 "strip_size_kb": 64, 00:09:19.474 "state": "configuring", 00:09:19.474 "raid_level": "concat", 00:09:19.474 "superblock": true, 00:09:19.474 "num_base_bdevs": 4, 00:09:19.474 "num_base_bdevs_discovered": 2, 00:09:19.474 "num_base_bdevs_operational": 4, 00:09:19.474 "base_bdevs_list": [ 00:09:19.474 { 00:09:19.474 "name": null, 00:09:19.474 "uuid": "0006c825-0154-4b55-9bcc-6bba79dbadca", 00:09:19.474 "is_configured": false, 00:09:19.474 "data_offset": 0, 00:09:19.474 "data_size": 63488 00:09:19.474 }, 00:09:19.474 { 00:09:19.474 "name": null, 00:09:19.474 "uuid": "5911b3dd-3e73-42b8-ba4c-79e87e9d8eba", 00:09:19.474 "is_configured": false, 00:09:19.474 "data_offset": 0, 00:09:19.474 "data_size": 63488 00:09:19.474 }, 00:09:19.474 { 00:09:19.474 "name": "BaseBdev3", 00:09:19.474 "uuid": "3208450a-2c8b-4c4a-b23a-ca7eab368774", 00:09:19.474 "is_configured": true, 00:09:19.474 "data_offset": 2048, 00:09:19.474 "data_size": 63488 00:09:19.474 }, 00:09:19.474 { 00:09:19.474 "name": "BaseBdev4", 00:09:19.474 "uuid": "4ea3fdc6-8180-419e-8170-ba237b857f6f", 00:09:19.474 "is_configured": true, 00:09:19.474 "data_offset": 2048, 00:09:19.474 "data_size": 63488 00:09:19.474 } 00:09:19.474 ] 00:09:19.474 }' 00:09:19.474 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.474 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.731 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.731 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.731 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.731 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:09:19.731 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.731 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:19.731 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:19.731 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.731 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.731 [2024-10-30 09:43:58.317955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:19.731 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.731 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:19.731 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.731 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.731 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.731 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.731 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:19.731 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.731 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.731 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.731 09:43:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.731 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.731 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.732 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.732 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.732 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.989 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.989 "name": "Existed_Raid", 00:09:19.989 "uuid": "753f96dc-8b5a-4e23-8558-0639aaeb567a", 00:09:19.989 "strip_size_kb": 64, 00:09:19.989 "state": "configuring", 00:09:19.989 "raid_level": "concat", 00:09:19.989 "superblock": true, 00:09:19.989 "num_base_bdevs": 4, 00:09:19.989 "num_base_bdevs_discovered": 3, 00:09:19.989 "num_base_bdevs_operational": 4, 00:09:19.989 "base_bdevs_list": [ 00:09:19.989 { 00:09:19.989 "name": null, 00:09:19.989 "uuid": "0006c825-0154-4b55-9bcc-6bba79dbadca", 00:09:19.989 "is_configured": false, 00:09:19.989 "data_offset": 0, 00:09:19.989 "data_size": 63488 00:09:19.989 }, 00:09:19.989 { 00:09:19.989 "name": "BaseBdev2", 00:09:19.989 "uuid": "5911b3dd-3e73-42b8-ba4c-79e87e9d8eba", 00:09:19.989 "is_configured": true, 00:09:19.989 "data_offset": 2048, 00:09:19.989 "data_size": 63488 00:09:19.989 }, 00:09:19.989 { 00:09:19.989 "name": "BaseBdev3", 00:09:19.989 "uuid": "3208450a-2c8b-4c4a-b23a-ca7eab368774", 00:09:19.989 "is_configured": true, 00:09:19.989 "data_offset": 2048, 00:09:19.989 "data_size": 63488 00:09:19.989 }, 00:09:19.989 { 00:09:19.989 "name": "BaseBdev4", 00:09:19.989 "uuid": "4ea3fdc6-8180-419e-8170-ba237b857f6f", 00:09:19.989 "is_configured": true, 
00:09:19.989 "data_offset": 2048, 00:09:19.989 "data_size": 63488 00:09:19.989 } 00:09:19.989 ] 00:09:19.989 }' 00:09:19.989 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.989 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.253 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.253 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0006c825-0154-4b55-9bcc-6bba79dbadca 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:20.254 [2024-10-30 09:43:58.703987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:20.254 [2024-10-30 09:43:58.704175] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:20.254 [2024-10-30 09:43:58.704186] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:20.254 NewBaseBdev 00:09:20.254 [2024-10-30 09:43:58.704406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:20.254 [2024-10-30 09:43:58.704513] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:20.254 [2024-10-30 09:43:58.704525] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:20.254 [2024-10-30 09:43:58.704623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.254 09:43:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.254 [ 00:09:20.254 { 00:09:20.254 "name": "NewBaseBdev", 00:09:20.254 "aliases": [ 00:09:20.254 "0006c825-0154-4b55-9bcc-6bba79dbadca" 00:09:20.254 ], 00:09:20.254 "product_name": "Malloc disk", 00:09:20.254 "block_size": 512, 00:09:20.254 "num_blocks": 65536, 00:09:20.254 "uuid": "0006c825-0154-4b55-9bcc-6bba79dbadca", 00:09:20.254 "assigned_rate_limits": { 00:09:20.254 "rw_ios_per_sec": 0, 00:09:20.254 "rw_mbytes_per_sec": 0, 00:09:20.254 "r_mbytes_per_sec": 0, 00:09:20.254 "w_mbytes_per_sec": 0 00:09:20.254 }, 00:09:20.254 "claimed": true, 00:09:20.254 "claim_type": "exclusive_write", 00:09:20.254 "zoned": false, 00:09:20.254 "supported_io_types": { 00:09:20.254 "read": true, 00:09:20.254 "write": true, 00:09:20.254 "unmap": true, 00:09:20.254 "flush": true, 00:09:20.254 "reset": true, 00:09:20.254 "nvme_admin": false, 00:09:20.254 "nvme_io": false, 00:09:20.254 "nvme_io_md": false, 00:09:20.254 "write_zeroes": true, 00:09:20.254 "zcopy": true, 00:09:20.254 "get_zone_info": false, 00:09:20.254 "zone_management": false, 00:09:20.254 "zone_append": false, 00:09:20.254 "compare": false, 00:09:20.254 "compare_and_write": false, 00:09:20.254 "abort": true, 00:09:20.254 "seek_hole": false, 00:09:20.254 "seek_data": false, 00:09:20.254 "copy": true, 00:09:20.254 "nvme_iov_md": false 00:09:20.254 }, 00:09:20.254 "memory_domains": [ 00:09:20.254 { 00:09:20.254 "dma_device_id": "system", 00:09:20.254 "dma_device_type": 
1 00:09:20.254 }, 00:09:20.254 { 00:09:20.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.254 "dma_device_type": 2 00:09:20.254 } 00:09:20.254 ], 00:09:20.254 "driver_specific": {} 00:09:20.254 } 00:09:20.254 ] 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.254 
09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.254 "name": "Existed_Raid", 00:09:20.254 "uuid": "753f96dc-8b5a-4e23-8558-0639aaeb567a", 00:09:20.254 "strip_size_kb": 64, 00:09:20.254 "state": "online", 00:09:20.254 "raid_level": "concat", 00:09:20.254 "superblock": true, 00:09:20.254 "num_base_bdevs": 4, 00:09:20.254 "num_base_bdevs_discovered": 4, 00:09:20.254 "num_base_bdevs_operational": 4, 00:09:20.254 "base_bdevs_list": [ 00:09:20.254 { 00:09:20.254 "name": "NewBaseBdev", 00:09:20.254 "uuid": "0006c825-0154-4b55-9bcc-6bba79dbadca", 00:09:20.254 "is_configured": true, 00:09:20.254 "data_offset": 2048, 00:09:20.254 "data_size": 63488 00:09:20.254 }, 00:09:20.254 { 00:09:20.254 "name": "BaseBdev2", 00:09:20.254 "uuid": "5911b3dd-3e73-42b8-ba4c-79e87e9d8eba", 00:09:20.254 "is_configured": true, 00:09:20.254 "data_offset": 2048, 00:09:20.254 "data_size": 63488 00:09:20.254 }, 00:09:20.254 { 00:09:20.254 "name": "BaseBdev3", 00:09:20.254 "uuid": "3208450a-2c8b-4c4a-b23a-ca7eab368774", 00:09:20.254 "is_configured": true, 00:09:20.254 "data_offset": 2048, 00:09:20.254 "data_size": 63488 00:09:20.254 }, 00:09:20.254 { 00:09:20.254 "name": "BaseBdev4", 00:09:20.254 "uuid": "4ea3fdc6-8180-419e-8170-ba237b857f6f", 00:09:20.254 "is_configured": true, 00:09:20.254 "data_offset": 2048, 00:09:20.254 "data_size": 63488 00:09:20.254 } 00:09:20.254 ] 00:09:20.254 }' 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.254 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.522 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:20.522 
09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:20.522 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:20.522 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:20.522 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:20.522 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:20.522 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:20.522 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:20.522 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.522 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.522 [2024-10-30 09:43:59.044412] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.522 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.522 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:20.522 "name": "Existed_Raid", 00:09:20.522 "aliases": [ 00:09:20.522 "753f96dc-8b5a-4e23-8558-0639aaeb567a" 00:09:20.522 ], 00:09:20.522 "product_name": "Raid Volume", 00:09:20.522 "block_size": 512, 00:09:20.522 "num_blocks": 253952, 00:09:20.522 "uuid": "753f96dc-8b5a-4e23-8558-0639aaeb567a", 00:09:20.522 "assigned_rate_limits": { 00:09:20.522 "rw_ios_per_sec": 0, 00:09:20.522 "rw_mbytes_per_sec": 0, 00:09:20.522 "r_mbytes_per_sec": 0, 00:09:20.522 "w_mbytes_per_sec": 0 00:09:20.522 }, 00:09:20.522 "claimed": false, 00:09:20.522 "zoned": false, 00:09:20.522 "supported_io_types": { 00:09:20.522 "read": true, 00:09:20.522 "write": true, 
00:09:20.522 "unmap": true, 00:09:20.522 "flush": true, 00:09:20.522 "reset": true, 00:09:20.522 "nvme_admin": false, 00:09:20.522 "nvme_io": false, 00:09:20.522 "nvme_io_md": false, 00:09:20.522 "write_zeroes": true, 00:09:20.522 "zcopy": false, 00:09:20.522 "get_zone_info": false, 00:09:20.522 "zone_management": false, 00:09:20.522 "zone_append": false, 00:09:20.522 "compare": false, 00:09:20.522 "compare_and_write": false, 00:09:20.522 "abort": false, 00:09:20.522 "seek_hole": false, 00:09:20.522 "seek_data": false, 00:09:20.522 "copy": false, 00:09:20.522 "nvme_iov_md": false 00:09:20.522 }, 00:09:20.522 "memory_domains": [ 00:09:20.522 { 00:09:20.522 "dma_device_id": "system", 00:09:20.522 "dma_device_type": 1 00:09:20.522 }, 00:09:20.522 { 00:09:20.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.522 "dma_device_type": 2 00:09:20.522 }, 00:09:20.522 { 00:09:20.522 "dma_device_id": "system", 00:09:20.522 "dma_device_type": 1 00:09:20.522 }, 00:09:20.522 { 00:09:20.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.522 "dma_device_type": 2 00:09:20.522 }, 00:09:20.522 { 00:09:20.522 "dma_device_id": "system", 00:09:20.522 "dma_device_type": 1 00:09:20.522 }, 00:09:20.522 { 00:09:20.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.522 "dma_device_type": 2 00:09:20.522 }, 00:09:20.522 { 00:09:20.522 "dma_device_id": "system", 00:09:20.522 "dma_device_type": 1 00:09:20.522 }, 00:09:20.522 { 00:09:20.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.522 "dma_device_type": 2 00:09:20.522 } 00:09:20.522 ], 00:09:20.522 "driver_specific": { 00:09:20.522 "raid": { 00:09:20.522 "uuid": "753f96dc-8b5a-4e23-8558-0639aaeb567a", 00:09:20.523 "strip_size_kb": 64, 00:09:20.523 "state": "online", 00:09:20.523 "raid_level": "concat", 00:09:20.523 "superblock": true, 00:09:20.523 "num_base_bdevs": 4, 00:09:20.523 "num_base_bdevs_discovered": 4, 00:09:20.523 "num_base_bdevs_operational": 4, 00:09:20.523 "base_bdevs_list": [ 00:09:20.523 { 00:09:20.523 "name": 
"NewBaseBdev", 00:09:20.523 "uuid": "0006c825-0154-4b55-9bcc-6bba79dbadca", 00:09:20.523 "is_configured": true, 00:09:20.523 "data_offset": 2048, 00:09:20.523 "data_size": 63488 00:09:20.523 }, 00:09:20.523 { 00:09:20.523 "name": "BaseBdev2", 00:09:20.523 "uuid": "5911b3dd-3e73-42b8-ba4c-79e87e9d8eba", 00:09:20.523 "is_configured": true, 00:09:20.523 "data_offset": 2048, 00:09:20.523 "data_size": 63488 00:09:20.523 }, 00:09:20.523 { 00:09:20.523 "name": "BaseBdev3", 00:09:20.523 "uuid": "3208450a-2c8b-4c4a-b23a-ca7eab368774", 00:09:20.523 "is_configured": true, 00:09:20.523 "data_offset": 2048, 00:09:20.523 "data_size": 63488 00:09:20.523 }, 00:09:20.523 { 00:09:20.523 "name": "BaseBdev4", 00:09:20.523 "uuid": "4ea3fdc6-8180-419e-8170-ba237b857f6f", 00:09:20.523 "is_configured": true, 00:09:20.523 "data_offset": 2048, 00:09:20.523 "data_size": 63488 00:09:20.523 } 00:09:20.523 ] 00:09:20.523 } 00:09:20.523 } 00:09:20.523 }' 00:09:20.523 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:20.523 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:20.523 BaseBdev2 00:09:20.523 BaseBdev3 00:09:20.523 BaseBdev4' 00:09:20.523 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.523 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:20.523 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.523 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:20.523 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:20.523 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.523 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.781 
09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.781 [2024-10-30 09:43:59.260141] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:20.781 [2024-10-30 09:43:59.260251] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.781 [2024-10-30 09:43:59.260315] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.781 [2024-10-30 09:43:59.260372] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:20.781 [2024-10-30 09:43:59.260381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70208 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 70208 ']' 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 70208 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70208 00:09:20.781 killing process with pid 70208 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70208' 00:09:20.781 09:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 70208 00:09:20.781 [2024-10-30 09:43:59.290053] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:20.781 09:43:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 70208 00:09:21.038 [2024-10-30 09:43:59.478980] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:21.601 ************************************ 00:09:21.601 END TEST raid_state_function_test_sb 00:09:21.601 ************************************ 00:09:21.601 09:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:21.601 00:09:21.601 real 0m8.042s 00:09:21.601 user 0m12.972s 00:09:21.601 sys 0m1.332s 00:09:21.601 09:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:21.601 09:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.601 09:44:00 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:09:21.601 09:44:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:21.601 09:44:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:21.601 09:44:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:21.601 ************************************ 00:09:21.601 START TEST raid_superblock_test 00:09:21.601 ************************************ 00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 4 00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70839 00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70839 00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 70839 ']' 00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:21.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:21.601 09:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.601 [2024-10-30 09:44:00.130490] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:09:21.601 [2024-10-30 09:44:00.130608] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70839 ] 00:09:21.858 [2024-10-30 09:44:00.288809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.858 [2024-10-30 09:44:00.384877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.113 [2024-10-30 09:44:00.519221] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.113 [2024-10-30 09:44:00.519263] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.370 09:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:22.370 09:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:09:22.370 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:22.370 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:22.370 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:22.370 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:22.370 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:22.370 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:22.370 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:22.370 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:22.370 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:22.370 09:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.370 09:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.627 malloc1 00:09:22.627 09:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.627 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:22.627 09:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.627 09:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.627 [2024-10-30 09:44:01.001883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:22.627 [2024-10-30 09:44:01.001942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.627 [2024-10-30 09:44:01.001961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:22.627 [2024-10-30 09:44:01.001971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.627 [2024-10-30 09:44:01.004068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.627 [2024-10-30 09:44:01.004099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:22.627 pt1 00:09:22.627 09:44:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.627 malloc2 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.627 [2024-10-30 09:44:01.041385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:22.627 [2024-10-30 09:44:01.041430] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.627 [2024-10-30 09:44:01.041451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:22.627 [2024-10-30 09:44:01.041459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.627 [2024-10-30 09:44:01.043501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.627 [2024-10-30 09:44:01.043642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:22.627 pt2 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.627 malloc3 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.627 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.627 [2024-10-30 09:44:01.090685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:22.627 [2024-10-30 09:44:01.090732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.628 [2024-10-30 09:44:01.090752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:22.628 [2024-10-30 09:44:01.090761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.628 [2024-10-30 09:44:01.092822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.628 pt3 00:09:22.628 [2024-10-30 09:44:01.092958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.628 malloc4 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.628 [2024-10-30 09:44:01.126255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:22.628 [2024-10-30 09:44:01.126296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.628 [2024-10-30 09:44:01.126311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:22.628 [2024-10-30 09:44:01.126319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.628 [2024-10-30 09:44:01.128372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.628 [2024-10-30 09:44:01.128403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:22.628 pt4 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.628 [2024-10-30 09:44:01.134293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:22.628 [2024-10-30 09:44:01.136107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:22.628 [2024-10-30 09:44:01.136169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:22.628 [2024-10-30 09:44:01.136229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:22.628 [2024-10-30 09:44:01.136407] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:22.628 [2024-10-30 09:44:01.136417] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:22.628 [2024-10-30 09:44:01.136685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:22.628 [2024-10-30 09:44:01.136841] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:22.628 [2024-10-30 09:44:01.136852] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:22.628 [2024-10-30 09:44:01.136989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- 
# local raid_bdev_name=raid_bdev1 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.628 "name": "raid_bdev1", 00:09:22.628 "uuid": "b443cb91-84f7-45a6-a000-cea202c35c8a", 00:09:22.628 "strip_size_kb": 64, 00:09:22.628 "state": "online", 00:09:22.628 "raid_level": "concat", 00:09:22.628 "superblock": true, 00:09:22.628 "num_base_bdevs": 4, 00:09:22.628 "num_base_bdevs_discovered": 4, 00:09:22.628 "num_base_bdevs_operational": 4, 00:09:22.628 "base_bdevs_list": [ 00:09:22.628 { 00:09:22.628 "name": "pt1", 00:09:22.628 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:09:22.628 "is_configured": true, 00:09:22.628 "data_offset": 2048, 00:09:22.628 "data_size": 63488 00:09:22.628 }, 00:09:22.628 { 00:09:22.628 "name": "pt2", 00:09:22.628 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:22.628 "is_configured": true, 00:09:22.628 "data_offset": 2048, 00:09:22.628 "data_size": 63488 00:09:22.628 }, 00:09:22.628 { 00:09:22.628 "name": "pt3", 00:09:22.628 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:22.628 "is_configured": true, 00:09:22.628 "data_offset": 2048, 00:09:22.628 "data_size": 63488 00:09:22.628 }, 00:09:22.628 { 00:09:22.628 "name": "pt4", 00:09:22.628 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:22.628 "is_configured": true, 00:09:22.628 "data_offset": 2048, 00:09:22.628 "data_size": 63488 00:09:22.628 } 00:09:22.628 ] 00:09:22.628 }' 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.628 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.886 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:22.886 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:22.886 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:22.886 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:22.886 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:22.886 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:22.886 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:22.886 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:22.886 09:44:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.886 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.886 [2024-10-30 09:44:01.454690] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:22.886 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.886 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:22.886 "name": "raid_bdev1", 00:09:22.886 "aliases": [ 00:09:22.886 "b443cb91-84f7-45a6-a000-cea202c35c8a" 00:09:22.886 ], 00:09:22.886 "product_name": "Raid Volume", 00:09:22.886 "block_size": 512, 00:09:22.886 "num_blocks": 253952, 00:09:22.886 "uuid": "b443cb91-84f7-45a6-a000-cea202c35c8a", 00:09:22.886 "assigned_rate_limits": { 00:09:22.886 "rw_ios_per_sec": 0, 00:09:22.886 "rw_mbytes_per_sec": 0, 00:09:22.886 "r_mbytes_per_sec": 0, 00:09:22.886 "w_mbytes_per_sec": 0 00:09:22.886 }, 00:09:22.886 "claimed": false, 00:09:22.886 "zoned": false, 00:09:22.886 "supported_io_types": { 00:09:22.886 "read": true, 00:09:22.886 "write": true, 00:09:22.886 "unmap": true, 00:09:22.886 "flush": true, 00:09:22.886 "reset": true, 00:09:22.886 "nvme_admin": false, 00:09:22.886 "nvme_io": false, 00:09:22.886 "nvme_io_md": false, 00:09:22.886 "write_zeroes": true, 00:09:22.886 "zcopy": false, 00:09:22.886 "get_zone_info": false, 00:09:22.886 "zone_management": false, 00:09:22.886 "zone_append": false, 00:09:22.886 "compare": false, 00:09:22.886 "compare_and_write": false, 00:09:22.886 "abort": false, 00:09:22.886 "seek_hole": false, 00:09:22.886 "seek_data": false, 00:09:22.886 "copy": false, 00:09:22.886 "nvme_iov_md": false 00:09:22.886 }, 00:09:22.886 "memory_domains": [ 00:09:22.886 { 00:09:22.886 "dma_device_id": "system", 00:09:22.886 "dma_device_type": 1 00:09:22.886 }, 00:09:22.886 { 00:09:22.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.886 "dma_device_type": 2 00:09:22.886 }, 00:09:22.886 { 
00:09:22.886 "dma_device_id": "system", 00:09:22.886 "dma_device_type": 1 00:09:22.886 }, 00:09:22.886 { 00:09:22.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.886 "dma_device_type": 2 00:09:22.886 }, 00:09:22.886 { 00:09:22.886 "dma_device_id": "system", 00:09:22.886 "dma_device_type": 1 00:09:22.886 }, 00:09:22.886 { 00:09:22.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.886 "dma_device_type": 2 00:09:22.886 }, 00:09:22.886 { 00:09:22.886 "dma_device_id": "system", 00:09:22.886 "dma_device_type": 1 00:09:22.886 }, 00:09:22.886 { 00:09:22.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.886 "dma_device_type": 2 00:09:22.886 } 00:09:22.886 ], 00:09:22.886 "driver_specific": { 00:09:22.886 "raid": { 00:09:22.886 "uuid": "b443cb91-84f7-45a6-a000-cea202c35c8a", 00:09:22.886 "strip_size_kb": 64, 00:09:22.886 "state": "online", 00:09:22.886 "raid_level": "concat", 00:09:22.886 "superblock": true, 00:09:22.886 "num_base_bdevs": 4, 00:09:22.886 "num_base_bdevs_discovered": 4, 00:09:22.886 "num_base_bdevs_operational": 4, 00:09:22.886 "base_bdevs_list": [ 00:09:22.886 { 00:09:22.886 "name": "pt1", 00:09:22.886 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:22.886 "is_configured": true, 00:09:22.886 "data_offset": 2048, 00:09:22.886 "data_size": 63488 00:09:22.886 }, 00:09:22.886 { 00:09:22.886 "name": "pt2", 00:09:22.886 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:22.886 "is_configured": true, 00:09:22.886 "data_offset": 2048, 00:09:22.886 "data_size": 63488 00:09:22.886 }, 00:09:22.886 { 00:09:22.886 "name": "pt3", 00:09:22.886 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:22.886 "is_configured": true, 00:09:22.886 "data_offset": 2048, 00:09:22.886 "data_size": 63488 00:09:22.886 }, 00:09:22.886 { 00:09:22.886 "name": "pt4", 00:09:22.887 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:22.887 "is_configured": true, 00:09:22.887 "data_offset": 2048, 00:09:22.887 "data_size": 63488 00:09:22.887 } 00:09:22.887 ] 
00:09:22.887 } 00:09:22.887 } 00:09:22.887 }' 00:09:22.887 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:23.145 pt2 00:09:23.145 pt3 00:09:23.145 pt4' 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.145 09:44:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:23.145 [2024-10-30 09:44:01.682696] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b443cb91-84f7-45a6-a000-cea202c35c8a 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b443cb91-84f7-45a6-a000-cea202c35c8a ']' 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.145 [2024-10-30 09:44:01.718379] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:23.145 [2024-10-30 09:44:01.718400] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:23.145 [2024-10-30 09:44:01.718461] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:23.145 [2024-10-30 09:44:01.718528] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:09:23.145 [2024-10-30 09:44:01.718541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.145 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.403 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.403 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:23.403 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:23.403 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.403 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:23.403 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.403 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:23.403 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:23.403 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.403 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.403 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.403 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:23.403 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:23.403 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.403 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.403 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.403 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:23.403 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:23.403 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.403 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.403 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.403 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:23.403 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 
malloc4'\''' -n raid_bdev1 00:09:23.403 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:23.403 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.404 [2024-10-30 09:44:01.834441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:23.404 [2024-10-30 09:44:01.836276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:23.404 [2024-10-30 09:44:01.836322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:23.404 [2024-10-30 09:44:01.836356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:23.404 [2024-10-30 09:44:01.836401] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:23.404 [2024-10-30 09:44:01.836446] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on 
bdev malloc2 00:09:23.404 [2024-10-30 09:44:01.836465] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:23.404 [2024-10-30 09:44:01.836483] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:23.404 [2024-10-30 09:44:01.836496] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:23.404 [2024-10-30 09:44:01.836507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:23.404 request: 00:09:23.404 { 00:09:23.404 "name": "raid_bdev1", 00:09:23.404 "raid_level": "concat", 00:09:23.404 "base_bdevs": [ 00:09:23.404 "malloc1", 00:09:23.404 "malloc2", 00:09:23.404 "malloc3", 00:09:23.404 "malloc4" 00:09:23.404 ], 00:09:23.404 "strip_size_kb": 64, 00:09:23.404 "superblock": false, 00:09:23.404 "method": "bdev_raid_create", 00:09:23.404 "req_id": 1 00:09:23.404 } 00:09:23.404 Got JSON-RPC error response 00:09:23.404 response: 00:09:23.404 { 00:09:23.404 "code": -17, 00:09:23.404 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:23.404 } 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.404 [2024-10-30 09:44:01.878416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:23.404 [2024-10-30 09:44:01.878458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.404 [2024-10-30 09:44:01.878472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:23.404 [2024-10-30 09:44:01.878482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.404 [2024-10-30 09:44:01.880554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.404 [2024-10-30 09:44:01.880591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:23.404 [2024-10-30 09:44:01.880652] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:23.404 [2024-10-30 09:44:01.880700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:23.404 pt1 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # 
verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.404 "name": "raid_bdev1", 00:09:23.404 "uuid": "b443cb91-84f7-45a6-a000-cea202c35c8a", 00:09:23.404 "strip_size_kb": 64, 00:09:23.404 "state": "configuring", 00:09:23.404 "raid_level": "concat", 00:09:23.404 "superblock": true, 00:09:23.404 "num_base_bdevs": 4, 00:09:23.404 "num_base_bdevs_discovered": 1, 
00:09:23.404 "num_base_bdevs_operational": 4, 00:09:23.404 "base_bdevs_list": [ 00:09:23.404 { 00:09:23.404 "name": "pt1", 00:09:23.404 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:23.404 "is_configured": true, 00:09:23.404 "data_offset": 2048, 00:09:23.404 "data_size": 63488 00:09:23.404 }, 00:09:23.404 { 00:09:23.404 "name": null, 00:09:23.404 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:23.404 "is_configured": false, 00:09:23.404 "data_offset": 2048, 00:09:23.404 "data_size": 63488 00:09:23.404 }, 00:09:23.404 { 00:09:23.404 "name": null, 00:09:23.404 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:23.404 "is_configured": false, 00:09:23.404 "data_offset": 2048, 00:09:23.404 "data_size": 63488 00:09:23.404 }, 00:09:23.404 { 00:09:23.404 "name": null, 00:09:23.404 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:23.404 "is_configured": false, 00:09:23.404 "data_offset": 2048, 00:09:23.404 "data_size": 63488 00:09:23.404 } 00:09:23.404 ] 00:09:23.404 }' 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.404 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.662 [2024-10-30 09:44:02.178512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:23.662 [2024-10-30 09:44:02.178572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.662 [2024-10-30 09:44:02.178591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:09:23.662 [2024-10-30 09:44:02.178601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.662 [2024-10-30 09:44:02.178981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.662 [2024-10-30 09:44:02.179002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:23.662 [2024-10-30 09:44:02.179079] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:23.662 [2024-10-30 09:44:02.179208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:23.662 pt2 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.662 [2024-10-30 09:44:02.190522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:23.662 
09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.662 "name": "raid_bdev1", 00:09:23.662 "uuid": "b443cb91-84f7-45a6-a000-cea202c35c8a", 00:09:23.662 "strip_size_kb": 64, 00:09:23.662 "state": "configuring", 00:09:23.662 "raid_level": "concat", 00:09:23.662 "superblock": true, 00:09:23.662 "num_base_bdevs": 4, 00:09:23.662 "num_base_bdevs_discovered": 1, 00:09:23.662 "num_base_bdevs_operational": 4, 00:09:23.662 "base_bdevs_list": [ 00:09:23.662 { 00:09:23.662 "name": "pt1", 00:09:23.662 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:23.662 "is_configured": true, 00:09:23.662 "data_offset": 2048, 00:09:23.662 "data_size": 63488 00:09:23.662 }, 00:09:23.662 { 00:09:23.662 "name": null, 00:09:23.662 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:23.662 "is_configured": false, 00:09:23.662 "data_offset": 0, 00:09:23.662 "data_size": 63488 00:09:23.662 }, 00:09:23.662 { 00:09:23.662 "name": null, 00:09:23.662 "uuid": "00000000-0000-0000-0000-000000000003", 
00:09:23.662 "is_configured": false, 00:09:23.662 "data_offset": 2048, 00:09:23.662 "data_size": 63488 00:09:23.662 }, 00:09:23.662 { 00:09:23.662 "name": null, 00:09:23.662 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:23.662 "is_configured": false, 00:09:23.662 "data_offset": 2048, 00:09:23.662 "data_size": 63488 00:09:23.662 } 00:09:23.662 ] 00:09:23.662 }' 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.662 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.921 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:23.921 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:23.921 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:23.921 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.921 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.921 [2024-10-30 09:44:02.502588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:23.921 [2024-10-30 09:44:02.502639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.921 [2024-10-30 09:44:02.502656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:23.921 [2024-10-30 09:44:02.502665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.921 [2024-10-30 09:44:02.503052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.921 [2024-10-30 09:44:02.503082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:23.921 [2024-10-30 09:44:02.503151] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 
00:09:23.921 [2024-10-30 09:44:02.503169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:23.921 pt2 00:09:23.921 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.921 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:23.921 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:23.921 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:23.921 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.921 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.921 [2024-10-30 09:44:02.510575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:23.921 [2024-10-30 09:44:02.510616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.921 [2024-10-30 09:44:02.510635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:23.921 [2024-10-30 09:44:02.510645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.922 [2024-10-30 09:44:02.510981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.922 [2024-10-30 09:44:02.510999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:23.922 [2024-10-30 09:44:02.511054] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:23.922 [2024-10-30 09:44:02.511084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:23.922 pt3 00:09:23.922 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.922 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 
00:09:23.922 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:23.922 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:23.922 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.922 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.922 [2024-10-30 09:44:02.518555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:23.922 [2024-10-30 09:44:02.518593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.922 [2024-10-30 09:44:02.518607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:09:23.922 [2024-10-30 09:44:02.518615] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.922 [2024-10-30 09:44:02.518940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.922 [2024-10-30 09:44:02.518961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:23.922 [2024-10-30 09:44:02.519012] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:23.922 [2024-10-30 09:44:02.519032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:23.922 [2024-10-30 09:44:02.519167] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:23.922 [2024-10-30 09:44:02.519180] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:23.922 [2024-10-30 09:44:02.519419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:23.922 [2024-10-30 09:44:02.519540] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:23.922 [2024-10-30 09:44:02.519549] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:23.922 [2024-10-30 09:44:02.519660] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.922 pt4 00:09:23.922 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.922 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:23.922 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:23.922 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:23.922 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:23.922 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.922 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.922 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.922 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:23.922 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.922 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.922 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.922 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.922 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.922 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.922 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.922 09:44:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:23.922 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.179 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.179 "name": "raid_bdev1", 00:09:24.179 "uuid": "b443cb91-84f7-45a6-a000-cea202c35c8a", 00:09:24.179 "strip_size_kb": 64, 00:09:24.179 "state": "online", 00:09:24.179 "raid_level": "concat", 00:09:24.179 "superblock": true, 00:09:24.179 "num_base_bdevs": 4, 00:09:24.179 "num_base_bdevs_discovered": 4, 00:09:24.179 "num_base_bdevs_operational": 4, 00:09:24.179 "base_bdevs_list": [ 00:09:24.179 { 00:09:24.179 "name": "pt1", 00:09:24.179 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:24.179 "is_configured": true, 00:09:24.179 "data_offset": 2048, 00:09:24.179 "data_size": 63488 00:09:24.179 }, 00:09:24.179 { 00:09:24.179 "name": "pt2", 00:09:24.179 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:24.179 "is_configured": true, 00:09:24.179 "data_offset": 2048, 00:09:24.179 "data_size": 63488 00:09:24.179 }, 00:09:24.179 { 00:09:24.179 "name": "pt3", 00:09:24.179 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:24.179 "is_configured": true, 00:09:24.179 "data_offset": 2048, 00:09:24.179 "data_size": 63488 00:09:24.179 }, 00:09:24.179 { 00:09:24.179 "name": "pt4", 00:09:24.179 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:24.179 "is_configured": true, 00:09:24.179 "data_offset": 2048, 00:09:24.179 "data_size": 63488 00:09:24.179 } 00:09:24.179 ] 00:09:24.179 }' 00:09:24.179 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.179 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:24.439 [2024-10-30 09:44:02.851028] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:24.439 "name": "raid_bdev1", 00:09:24.439 "aliases": [ 00:09:24.439 "b443cb91-84f7-45a6-a000-cea202c35c8a" 00:09:24.439 ], 00:09:24.439 "product_name": "Raid Volume", 00:09:24.439 "block_size": 512, 00:09:24.439 "num_blocks": 253952, 00:09:24.439 "uuid": "b443cb91-84f7-45a6-a000-cea202c35c8a", 00:09:24.439 "assigned_rate_limits": { 00:09:24.439 "rw_ios_per_sec": 0, 00:09:24.439 "rw_mbytes_per_sec": 0, 00:09:24.439 "r_mbytes_per_sec": 0, 00:09:24.439 "w_mbytes_per_sec": 0 00:09:24.439 }, 00:09:24.439 "claimed": false, 00:09:24.439 "zoned": false, 00:09:24.439 "supported_io_types": { 00:09:24.439 "read": true, 00:09:24.439 "write": true, 00:09:24.439 "unmap": true, 00:09:24.439 "flush": true, 00:09:24.439 "reset": true, 00:09:24.439 "nvme_admin": false, 
00:09:24.439 "nvme_io": false, 00:09:24.439 "nvme_io_md": false, 00:09:24.439 "write_zeroes": true, 00:09:24.439 "zcopy": false, 00:09:24.439 "get_zone_info": false, 00:09:24.439 "zone_management": false, 00:09:24.439 "zone_append": false, 00:09:24.439 "compare": false, 00:09:24.439 "compare_and_write": false, 00:09:24.439 "abort": false, 00:09:24.439 "seek_hole": false, 00:09:24.439 "seek_data": false, 00:09:24.439 "copy": false, 00:09:24.439 "nvme_iov_md": false 00:09:24.439 }, 00:09:24.439 "memory_domains": [ 00:09:24.439 { 00:09:24.439 "dma_device_id": "system", 00:09:24.439 "dma_device_type": 1 00:09:24.439 }, 00:09:24.439 { 00:09:24.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.439 "dma_device_type": 2 00:09:24.439 }, 00:09:24.439 { 00:09:24.439 "dma_device_id": "system", 00:09:24.439 "dma_device_type": 1 00:09:24.439 }, 00:09:24.439 { 00:09:24.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.439 "dma_device_type": 2 00:09:24.439 }, 00:09:24.439 { 00:09:24.439 "dma_device_id": "system", 00:09:24.439 "dma_device_type": 1 00:09:24.439 }, 00:09:24.439 { 00:09:24.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.439 "dma_device_type": 2 00:09:24.439 }, 00:09:24.439 { 00:09:24.439 "dma_device_id": "system", 00:09:24.439 "dma_device_type": 1 00:09:24.439 }, 00:09:24.439 { 00:09:24.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.439 "dma_device_type": 2 00:09:24.439 } 00:09:24.439 ], 00:09:24.439 "driver_specific": { 00:09:24.439 "raid": { 00:09:24.439 "uuid": "b443cb91-84f7-45a6-a000-cea202c35c8a", 00:09:24.439 "strip_size_kb": 64, 00:09:24.439 "state": "online", 00:09:24.439 "raid_level": "concat", 00:09:24.439 "superblock": true, 00:09:24.439 "num_base_bdevs": 4, 00:09:24.439 "num_base_bdevs_discovered": 4, 00:09:24.439 "num_base_bdevs_operational": 4, 00:09:24.439 "base_bdevs_list": [ 00:09:24.439 { 00:09:24.439 "name": "pt1", 00:09:24.439 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:24.439 "is_configured": true, 00:09:24.439 
"data_offset": 2048, 00:09:24.439 "data_size": 63488 00:09:24.439 }, 00:09:24.439 { 00:09:24.439 "name": "pt2", 00:09:24.439 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:24.439 "is_configured": true, 00:09:24.439 "data_offset": 2048, 00:09:24.439 "data_size": 63488 00:09:24.439 }, 00:09:24.439 { 00:09:24.439 "name": "pt3", 00:09:24.439 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:24.439 "is_configured": true, 00:09:24.439 "data_offset": 2048, 00:09:24.439 "data_size": 63488 00:09:24.439 }, 00:09:24.439 { 00:09:24.439 "name": "pt4", 00:09:24.439 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:24.439 "is_configured": true, 00:09:24.439 "data_offset": 2048, 00:09:24.439 "data_size": 63488 00:09:24.439 } 00:09:24.439 ] 00:09:24.439 } 00:09:24.439 } 00:09:24.439 }' 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:24.439 pt2 00:09:24.439 pt3 00:09:24.439 pt4' 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.439 09:44:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.439 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.440 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.440 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.440 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.440 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.440 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.440 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:24.440 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.440 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.440 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.440 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:24.440 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.440 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.440 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:24.440 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.440 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.440 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.440 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.697 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.697 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.697 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:24.697 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:24.697 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.697 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.697 [2024-10-30 09:44:03.079009] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:24.697 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.697 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b443cb91-84f7-45a6-a000-cea202c35c8a '!=' b443cb91-84f7-45a6-a000-cea202c35c8a ']' 00:09:24.697 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:24.697 09:44:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:09:24.697 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:24.697 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70839 00:09:24.697 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 70839 ']' 00:09:24.697 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 70839 00:09:24.697 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:24.697 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:24.697 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70839 00:09:24.697 killing process with pid 70839 00:09:24.697 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:24.697 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:24.697 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70839' 00:09:24.697 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 70839 00:09:24.697 [2024-10-30 09:44:03.115516] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:24.697 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 70839 00:09:24.697 [2024-10-30 09:44:03.115582] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:24.697 [2024-10-30 09:44:03.115652] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:24.697 [2024-10-30 09:44:03.115661] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:24.955 [2024-10-30 09:44:03.352688] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:09:25.521 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:25.521 00:09:25.521 real 0m3.844s 00:09:25.521 user 0m5.601s 00:09:25.521 sys 0m0.605s 00:09:25.521 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:25.521 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.521 ************************************ 00:09:25.521 END TEST raid_superblock_test 00:09:25.521 ************************************ 00:09:25.521 09:44:03 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:09:25.521 09:44:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:25.521 09:44:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:25.521 09:44:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:25.521 ************************************ 00:09:25.521 START TEST raid_read_error_test 00:09:25.521 ************************************ 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 read 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= 
num_base_bdevs )) 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 
00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.h4d2E07N0I 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71082 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71082 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 71082 ']' 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:25.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.521 09:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:25.521 [2024-10-30 09:44:04.030362] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:09:25.521 [2024-10-30 09:44:04.030479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71082 ] 00:09:25.779 [2024-10-30 09:44:04.184030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.779 [2024-10-30 09:44:04.264155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.779 [2024-10-30 09:44:04.372832] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.779 [2024-10-30 09:44:04.372864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.345 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:26.345 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:26.345 09:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:26.345 09:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:26.345 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.345 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.345 BaseBdev1_malloc 00:09:26.345 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.345 09:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:26.345 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.345 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.345 true 00:09:26.345 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:26.345 09:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:26.345 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.345 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.345 [2024-10-30 09:44:04.900995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:26.345 [2024-10-30 09:44:04.901040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.345 [2024-10-30 09:44:04.901068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:26.345 [2024-10-30 09:44:04.901078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.345 [2024-10-30 09:44:04.902807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.345 [2024-10-30 09:44:04.902841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:26.345 BaseBdev1 00:09:26.345 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.345 09:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:26.345 09:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:26.345 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.345 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.345 BaseBdev2_malloc 00:09:26.345 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.346 09:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:26.346 09:44:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.346 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.346 true 00:09:26.346 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.346 09:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:26.346 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.346 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.346 [2024-10-30 09:44:04.940213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:26.346 [2024-10-30 09:44:04.940251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.346 [2024-10-30 09:44:04.940263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:26.346 [2024-10-30 09:44:04.940271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.346 [2024-10-30 09:44:04.942009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.346 [2024-10-30 09:44:04.942040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:26.346 BaseBdev2 00:09:26.346 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.346 09:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:26.346 09:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:26.346 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.346 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.604 BaseBdev3_malloc 00:09:26.604 09:44:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.604 09:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:26.604 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.604 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.604 true 00:09:26.604 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.604 09:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:26.604 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.604 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.604 [2024-10-30 09:44:04.992591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:26.604 [2024-10-30 09:44:04.992631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.604 [2024-10-30 09:44:04.992644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:26.604 [2024-10-30 09:44:04.992652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.604 [2024-10-30 09:44:04.994401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.604 [2024-10-30 09:44:04.994429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:26.604 BaseBdev3 00:09:26.604 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.604 09:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:26.604 09:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:09:26.604 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.604 09:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.604 BaseBdev4_malloc 00:09:26.604 09:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.604 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:26.604 09:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.604 09:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.604 true 00:09:26.604 09:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.604 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:26.604 09:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.604 09:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.604 [2024-10-30 09:44:05.031773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:26.604 [2024-10-30 09:44:05.031805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.604 [2024-10-30 09:44:05.031818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:26.604 [2024-10-30 09:44:05.031826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.604 [2024-10-30 09:44:05.033549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.604 [2024-10-30 09:44:05.033577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:26.604 BaseBdev4 00:09:26.604 09:44:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.605 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:26.605 09:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.605 09:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.605 [2024-10-30 09:44:05.039827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:26.605 [2024-10-30 09:44:05.041405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:26.605 [2024-10-30 09:44:05.041469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:26.605 [2024-10-30 09:44:05.041526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:26.605 [2024-10-30 09:44:05.041706] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:09:26.605 [2024-10-30 09:44:05.041722] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:26.605 [2024-10-30 09:44:05.041918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:26.605 [2024-10-30 09:44:05.042041] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:09:26.605 [2024-10-30 09:44:05.042064] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:09:26.605 [2024-10-30 09:44:05.042180] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.605 09:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.605 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:26.605 09:44:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:26.605 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.605 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.605 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.605 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:26.605 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.605 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.605 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.605 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.605 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.605 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.605 09:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.605 09:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.605 09:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.605 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.605 "name": "raid_bdev1", 00:09:26.605 "uuid": "1de89c6e-79d3-4e03-a4c5-ff8c382aaeff", 00:09:26.605 "strip_size_kb": 64, 00:09:26.605 "state": "online", 00:09:26.605 "raid_level": "concat", 00:09:26.605 "superblock": true, 00:09:26.605 "num_base_bdevs": 4, 00:09:26.605 "num_base_bdevs_discovered": 4, 00:09:26.605 "num_base_bdevs_operational": 4, 00:09:26.605 "base_bdevs_list": [ 
00:09:26.605 { 00:09:26.605 "name": "BaseBdev1", 00:09:26.605 "uuid": "da94e7b7-0113-5e36-bbe2-0c6f850639bf", 00:09:26.605 "is_configured": true, 00:09:26.605 "data_offset": 2048, 00:09:26.605 "data_size": 63488 00:09:26.605 }, 00:09:26.605 { 00:09:26.605 "name": "BaseBdev2", 00:09:26.605 "uuid": "37049f8f-604a-510f-8343-c7873f633fae", 00:09:26.605 "is_configured": true, 00:09:26.605 "data_offset": 2048, 00:09:26.605 "data_size": 63488 00:09:26.605 }, 00:09:26.605 { 00:09:26.605 "name": "BaseBdev3", 00:09:26.605 "uuid": "f8cd08f6-3a31-5e88-a9fb-7b02960b1914", 00:09:26.605 "is_configured": true, 00:09:26.605 "data_offset": 2048, 00:09:26.605 "data_size": 63488 00:09:26.605 }, 00:09:26.605 { 00:09:26.605 "name": "BaseBdev4", 00:09:26.605 "uuid": "24f9e4b4-ec6a-5d60-b943-31985943a063", 00:09:26.605 "is_configured": true, 00:09:26.605 "data_offset": 2048, 00:09:26.605 "data_size": 63488 00:09:26.605 } 00:09:26.605 ] 00:09:26.605 }' 00:09:26.605 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.605 09:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.863 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:26.863 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:26.863 [2024-10-30 09:44:05.440650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:27.796 09:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:27.796 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.796 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.796 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.796 09:44:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:27.796 09:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:27.796 09:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:27.796 09:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:27.796 09:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:27.796 09:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:27.796 09:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.796 09:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.796 09:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:27.796 09:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.796 09:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.796 09:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.796 09:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.796 09:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.796 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.796 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.796 09:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.796 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.796 09:44:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.796 "name": "raid_bdev1", 00:09:27.796 "uuid": "1de89c6e-79d3-4e03-a4c5-ff8c382aaeff", 00:09:27.796 "strip_size_kb": 64, 00:09:27.796 "state": "online", 00:09:27.796 "raid_level": "concat", 00:09:27.796 "superblock": true, 00:09:27.796 "num_base_bdevs": 4, 00:09:27.796 "num_base_bdevs_discovered": 4, 00:09:27.796 "num_base_bdevs_operational": 4, 00:09:27.796 "base_bdevs_list": [ 00:09:27.796 { 00:09:27.796 "name": "BaseBdev1", 00:09:27.796 "uuid": "da94e7b7-0113-5e36-bbe2-0c6f850639bf", 00:09:27.796 "is_configured": true, 00:09:27.796 "data_offset": 2048, 00:09:27.796 "data_size": 63488 00:09:27.796 }, 00:09:27.796 { 00:09:27.796 "name": "BaseBdev2", 00:09:27.796 "uuid": "37049f8f-604a-510f-8343-c7873f633fae", 00:09:27.796 "is_configured": true, 00:09:27.796 "data_offset": 2048, 00:09:27.796 "data_size": 63488 00:09:27.796 }, 00:09:27.796 { 00:09:27.796 "name": "BaseBdev3", 00:09:27.796 "uuid": "f8cd08f6-3a31-5e88-a9fb-7b02960b1914", 00:09:27.796 "is_configured": true, 00:09:27.796 "data_offset": 2048, 00:09:27.796 "data_size": 63488 00:09:27.796 }, 00:09:27.796 { 00:09:27.796 "name": "BaseBdev4", 00:09:27.796 "uuid": "24f9e4b4-ec6a-5d60-b943-31985943a063", 00:09:27.796 "is_configured": true, 00:09:27.796 "data_offset": 2048, 00:09:27.796 "data_size": 63488 00:09:27.796 } 00:09:27.796 ] 00:09:27.796 }' 00:09:27.796 09:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.796 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.054 09:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:28.054 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.054 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.054 [2024-10-30 09:44:06.663624] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:28.054 [2024-10-30 09:44:06.663654] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:28.054 [2024-10-30 09:44:06.665997] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:28.054 [2024-10-30 09:44:06.666051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.054 [2024-10-30 09:44:06.666101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:28.054 [2024-10-30 09:44:06.666113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:09:28.054 { 00:09:28.054 "results": [ 00:09:28.054 { 00:09:28.054 "job": "raid_bdev1", 00:09:28.054 "core_mask": "0x1", 00:09:28.054 "workload": "randrw", 00:09:28.054 "percentage": 50, 00:09:28.054 "status": "finished", 00:09:28.054 "queue_depth": 1, 00:09:28.054 "io_size": 131072, 00:09:28.054 "runtime": 1.221456, 00:09:28.054 "iops": 18369.87988105998, 00:09:28.054 "mibps": 2296.2349851324975, 00:09:28.054 "io_failed": 1, 00:09:28.054 "io_timeout": 0, 00:09:28.054 "avg_latency_us": 74.54032107559983, 00:09:28.054 "min_latency_us": 26.38769230769231, 00:09:28.054 "max_latency_us": 1348.5292307692307 00:09:28.054 } 00:09:28.054 ], 00:09:28.054 "core_count": 1 00:09:28.054 } 00:09:28.054 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.054 09:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71082 00:09:28.054 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 71082 ']' 00:09:28.054 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 71082 00:09:28.054 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:09:28.312 09:44:06 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:28.312 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71082 00:09:28.312 killing process with pid 71082 00:09:28.312 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:28.312 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:28.312 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71082' 00:09:28.312 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 71082 00:09:28.312 [2024-10-30 09:44:06.695357] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:28.312 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 71082 00:09:28.312 [2024-10-30 09:44:06.850625] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:28.879 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.h4d2E07N0I 00:09:28.879 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:28.879 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:28.879 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.82 00:09:28.879 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:28.879 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:28.879 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:28.879 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.82 != \0\.\0\0 ]] 00:09:28.879 00:09:28.879 real 0m3.479s 00:09:28.879 user 0m4.160s 00:09:28.879 sys 0m0.366s 00:09:28.879 09:44:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:09:28.879 ************************************ 00:09:28.879 09:44:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.879 END TEST raid_read_error_test 00:09:28.879 ************************************ 00:09:28.879 09:44:07 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:09:28.879 09:44:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:28.879 09:44:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:28.879 09:44:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:28.879 ************************************ 00:09:28.879 START TEST raid_write_error_test 00:09:28.879 ************************************ 00:09:28.879 09:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 write 00:09:28.879 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:28.879 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:28.879 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:28.879 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:28.879 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.879 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:28.879 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:28.879 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.879 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.R9mbg8HJPg 00:09:28.880 09:44:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71221 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71221 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 71221 ']' 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.880 09:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:29.138 [2024-10-30 09:44:07.544039] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:09:29.138 [2024-10-30 09:44:07.544167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71221 ] 00:09:29.138 [2024-10-30 09:44:07.699184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.395 [2024-10-30 09:44:07.782523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.395 [2024-10-30 09:44:07.891898] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.395 [2024-10-30 09:44:07.891941] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.959 BaseBdev1_malloc 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.959 true 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.959 [2024-10-30 09:44:08.444745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:29.959 [2024-10-30 09:44:08.444794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.959 [2024-10-30 09:44:08.444817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:29.959 [2024-10-30 09:44:08.444826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.959 [2024-10-30 09:44:08.446561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.959 [2024-10-30 09:44:08.446595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:29.959 BaseBdev1 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.959 BaseBdev2_malloc 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:29.959 09:44:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.959 true 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.959 [2024-10-30 09:44:08.483988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:29.959 [2024-10-30 09:44:08.484032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.959 [2024-10-30 09:44:08.484044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:29.959 [2024-10-30 09:44:08.484053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.959 [2024-10-30 09:44:08.485769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.959 [2024-10-30 09:44:08.485801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:29.959 BaseBdev2 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:29.959 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:29.960 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.960 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:29.960 BaseBdev3_malloc 00:09:29.960 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.960 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:29.960 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.960 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.960 true 00:09:29.960 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.960 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:29.960 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.960 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.960 [2024-10-30 09:44:08.544224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:29.960 [2024-10-30 09:44:08.544264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.960 [2024-10-30 09:44:08.544278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:29.960 [2024-10-30 09:44:08.544286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.960 [2024-10-30 09:44:08.546012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.960 [2024-10-30 09:44:08.546044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:29.960 BaseBdev3 00:09:29.960 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.960 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:29.960 09:44:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:09:29.960 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.960 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.960 BaseBdev4_malloc 00:09:29.960 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.960 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:29.960 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.960 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.217 true 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.217 [2024-10-30 09:44:08.583361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:30.217 [2024-10-30 09:44:08.583399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.217 [2024-10-30 09:44:08.583412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:30.217 [2024-10-30 09:44:08.583421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.217 [2024-10-30 09:44:08.585113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.217 [2024-10-30 09:44:08.585143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:30.217 BaseBdev4 
00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.217 [2024-10-30 09:44:08.591424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:30.217 [2024-10-30 09:44:08.592947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:30.217 [2024-10-30 09:44:08.593013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:30.217 [2024-10-30 09:44:08.593079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:30.217 [2024-10-30 09:44:08.593265] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:09:30.217 [2024-10-30 09:44:08.593278] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:30.217 [2024-10-30 09:44:08.593479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:30.217 [2024-10-30 09:44:08.593603] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:09:30.217 [2024-10-30 09:44:08.593612] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:09:30.217 [2024-10-30 09:44:08.593728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.217 "name": "raid_bdev1", 00:09:30.217 "uuid": "b2742d1f-1ccc-42bc-afca-9d55542372b8", 00:09:30.217 "strip_size_kb": 64, 00:09:30.217 "state": "online", 00:09:30.217 "raid_level": "concat", 00:09:30.217 "superblock": true, 00:09:30.217 "num_base_bdevs": 4, 00:09:30.217 "num_base_bdevs_discovered": 4, 00:09:30.217 
"num_base_bdevs_operational": 4, 00:09:30.217 "base_bdevs_list": [ 00:09:30.217 { 00:09:30.217 "name": "BaseBdev1", 00:09:30.217 "uuid": "09ac516a-b00f-5803-b8c1-a838bd84a83f", 00:09:30.217 "is_configured": true, 00:09:30.217 "data_offset": 2048, 00:09:30.217 "data_size": 63488 00:09:30.217 }, 00:09:30.217 { 00:09:30.217 "name": "BaseBdev2", 00:09:30.217 "uuid": "e77e6bb4-dd5f-52fa-9f7c-c94ecd52a88c", 00:09:30.217 "is_configured": true, 00:09:30.217 "data_offset": 2048, 00:09:30.217 "data_size": 63488 00:09:30.217 }, 00:09:30.217 { 00:09:30.217 "name": "BaseBdev3", 00:09:30.217 "uuid": "9756cde8-9f85-5d21-9494-90f0eb309892", 00:09:30.217 "is_configured": true, 00:09:30.217 "data_offset": 2048, 00:09:30.217 "data_size": 63488 00:09:30.217 }, 00:09:30.217 { 00:09:30.217 "name": "BaseBdev4", 00:09:30.217 "uuid": "18c4336b-a7ab-5f54-afad-445dbf6ace40", 00:09:30.217 "is_configured": true, 00:09:30.217 "data_offset": 2048, 00:09:30.217 "data_size": 63488 00:09:30.217 } 00:09:30.217 ] 00:09:30.217 }' 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.217 09:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.475 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:30.475 09:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:30.475 [2024-10-30 09:44:08.984245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:31.406 09:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:31.406 09:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.406 09:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.406 09:44:09 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.406 09:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:31.406 09:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:31.406 09:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:31.406 09:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:31.406 09:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.406 09:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.406 09:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.406 09:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.406 09:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:31.406 09:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.406 09:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.406 09:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.406 09:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.406 09:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.406 09:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.406 09:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.406 09:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.406 09:44:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.406 09:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.406 "name": "raid_bdev1", 00:09:31.406 "uuid": "b2742d1f-1ccc-42bc-afca-9d55542372b8", 00:09:31.406 "strip_size_kb": 64, 00:09:31.406 "state": "online", 00:09:31.406 "raid_level": "concat", 00:09:31.406 "superblock": true, 00:09:31.406 "num_base_bdevs": 4, 00:09:31.406 "num_base_bdevs_discovered": 4, 00:09:31.406 "num_base_bdevs_operational": 4, 00:09:31.406 "base_bdevs_list": [ 00:09:31.406 { 00:09:31.406 "name": "BaseBdev1", 00:09:31.406 "uuid": "09ac516a-b00f-5803-b8c1-a838bd84a83f", 00:09:31.406 "is_configured": true, 00:09:31.406 "data_offset": 2048, 00:09:31.406 "data_size": 63488 00:09:31.406 }, 00:09:31.406 { 00:09:31.406 "name": "BaseBdev2", 00:09:31.406 "uuid": "e77e6bb4-dd5f-52fa-9f7c-c94ecd52a88c", 00:09:31.406 "is_configured": true, 00:09:31.406 "data_offset": 2048, 00:09:31.406 "data_size": 63488 00:09:31.406 }, 00:09:31.406 { 00:09:31.406 "name": "BaseBdev3", 00:09:31.406 "uuid": "9756cde8-9f85-5d21-9494-90f0eb309892", 00:09:31.406 "is_configured": true, 00:09:31.406 "data_offset": 2048, 00:09:31.406 "data_size": 63488 00:09:31.406 }, 00:09:31.406 { 00:09:31.406 "name": "BaseBdev4", 00:09:31.406 "uuid": "18c4336b-a7ab-5f54-afad-445dbf6ace40", 00:09:31.406 "is_configured": true, 00:09:31.406 "data_offset": 2048, 00:09:31.406 "data_size": 63488 00:09:31.406 } 00:09:31.406 ] 00:09:31.406 }' 00:09:31.406 09:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.406 09:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.663 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:31.663 09:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.663 09:44:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:31.663 [2024-10-30 09:44:10.227719] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:31.663 [2024-10-30 09:44:10.227749] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:31.663 [2024-10-30 09:44:10.230161] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.663 [2024-10-30 09:44:10.230215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.663 [2024-10-30 09:44:10.230251] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.663 [2024-10-30 09:44:10.230263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:09:31.663 { 00:09:31.663 "results": [ 00:09:31.663 { 00:09:31.663 "job": "raid_bdev1", 00:09:31.663 "core_mask": "0x1", 00:09:31.663 "workload": "randrw", 00:09:31.663 "percentage": 50, 00:09:31.663 "status": "finished", 00:09:31.663 "queue_depth": 1, 00:09:31.663 "io_size": 131072, 00:09:31.663 "runtime": 1.241962, 00:09:31.663 "iops": 18421.658633678002, 00:09:31.663 "mibps": 2302.7073292097502, 00:09:31.663 "io_failed": 1, 00:09:31.663 "io_timeout": 0, 00:09:31.663 "avg_latency_us": 74.37477783754707, 00:09:31.663 "min_latency_us": 25.403076923076924, 00:09:31.663 "max_latency_us": 1354.8307692307692 00:09:31.663 } 00:09:31.663 ], 00:09:31.664 "core_count": 1 00:09:31.664 } 00:09:31.664 09:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.664 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71221 00:09:31.664 09:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 71221 ']' 00:09:31.664 09:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 71221 00:09:31.664 09:44:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@957 -- # uname 00:09:31.664 09:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:31.664 09:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71221 00:09:31.664 09:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:31.664 09:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:31.664 killing process with pid 71221 00:09:31.664 09:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71221' 00:09:31.664 09:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 71221 00:09:31.664 [2024-10-30 09:44:10.253900] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:31.664 09:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 71221 00:09:31.926 [2024-10-30 09:44:10.409279] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:32.491 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.R9mbg8HJPg 00:09:32.491 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:32.491 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:32.491 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:09:32.491 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:32.491 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:32.491 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:32.491 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:09:32.491 00:09:32.491 real 0m3.535s 00:09:32.491 user 0m4.237s 
00:09:32.491 sys 0m0.378s 00:09:32.491 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:32.491 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.491 ************************************ 00:09:32.491 END TEST raid_write_error_test 00:09:32.491 ************************************ 00:09:32.491 09:44:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:32.491 09:44:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:09:32.491 09:44:11 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:32.491 09:44:11 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:32.491 09:44:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:32.491 ************************************ 00:09:32.491 START TEST raid_state_function_test 00:09:32.491 ************************************ 00:09:32.491 09:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 false 00:09:32.491 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:32.491 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:32.491 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:32.491 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:32.491 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:32.491 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.491 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:32.491 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.491 
09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.491 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:32.491 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.491 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.491 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:32.491 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.491 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.491 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:32.491 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.491 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.491 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:32.491 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:32.491 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:32.491 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:32.491 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:32.491 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:32.492 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:32.492 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:32.492 09:44:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:32.492 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:32.492 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71349 00:09:32.492 Process raid pid: 71349 00:09:32.492 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71349' 00:09:32.492 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71349 00:09:32.492 09:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 71349 ']' 00:09:32.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.492 09:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.492 09:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:32.492 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:32.492 09:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.492 09:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:32.492 09:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.749 [2024-10-30 09:44:11.116217] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:09:32.749 [2024-10-30 09:44:11.116335] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.749 [2024-10-30 09:44:11.271650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.749 [2024-10-30 09:44:11.351677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.007 [2024-10-30 09:44:11.460794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.007 [2024-10-30 09:44:11.460835] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.624 09:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:33.624 09:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:33.624 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:33.624 09:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.624 09:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.624 [2024-10-30 09:44:11.913561] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.624 [2024-10-30 09:44:11.913605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.624 [2024-10-30 09:44:11.913613] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.624 [2024-10-30 09:44:11.913621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.624 [2024-10-30 09:44:11.913626] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:33.624 [2024-10-30 09:44:11.913632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.624 [2024-10-30 09:44:11.913637] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:33.624 [2024-10-30 09:44:11.913644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:33.624 09:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.624 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:33.624 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.624 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.624 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.625 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.625 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.625 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.625 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.625 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.625 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.625 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.625 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.625 09:44:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.625 09:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.625 09:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.625 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.625 "name": "Existed_Raid", 00:09:33.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.625 "strip_size_kb": 0, 00:09:33.625 "state": "configuring", 00:09:33.625 "raid_level": "raid1", 00:09:33.625 "superblock": false, 00:09:33.625 "num_base_bdevs": 4, 00:09:33.625 "num_base_bdevs_discovered": 0, 00:09:33.625 "num_base_bdevs_operational": 4, 00:09:33.625 "base_bdevs_list": [ 00:09:33.625 { 00:09:33.625 "name": "BaseBdev1", 00:09:33.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.625 "is_configured": false, 00:09:33.625 "data_offset": 0, 00:09:33.625 "data_size": 0 00:09:33.625 }, 00:09:33.625 { 00:09:33.625 "name": "BaseBdev2", 00:09:33.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.625 "is_configured": false, 00:09:33.625 "data_offset": 0, 00:09:33.625 "data_size": 0 00:09:33.625 }, 00:09:33.625 { 00:09:33.625 "name": "BaseBdev3", 00:09:33.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.625 "is_configured": false, 00:09:33.625 "data_offset": 0, 00:09:33.625 "data_size": 0 00:09:33.625 }, 00:09:33.625 { 00:09:33.625 "name": "BaseBdev4", 00:09:33.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.625 "is_configured": false, 00:09:33.625 "data_offset": 0, 00:09:33.625 "data_size": 0 00:09:33.625 } 00:09:33.625 ] 00:09:33.625 }' 00:09:33.625 09:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.625 09:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.625 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:33.625 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.625 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.625 [2024-10-30 09:44:12.233575] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.625 [2024-10-30 09:44:12.233607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:33.625 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.625 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:33.625 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.625 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.625 [2024-10-30 09:44:12.241585] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.625 [2024-10-30 09:44:12.241616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.625 [2024-10-30 09:44:12.241623] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.625 [2024-10-30 09:44:12.241630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.625 [2024-10-30 09:44:12.241635] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:33.625 [2024-10-30 09:44:12.241642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.625 [2024-10-30 09:44:12.241647] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:33.625 [2024-10-30 09:44:12.241654] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:33.883 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.884 [2024-10-30 09:44:12.269217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.884 BaseBdev1 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.884 [ 00:09:33.884 { 00:09:33.884 "name": "BaseBdev1", 00:09:33.884 "aliases": [ 00:09:33.884 "6457ef9d-ec8c-49d0-b104-bbc81f8090cc" 00:09:33.884 ], 00:09:33.884 "product_name": "Malloc disk", 00:09:33.884 "block_size": 512, 00:09:33.884 "num_blocks": 65536, 00:09:33.884 "uuid": "6457ef9d-ec8c-49d0-b104-bbc81f8090cc", 00:09:33.884 "assigned_rate_limits": { 00:09:33.884 "rw_ios_per_sec": 0, 00:09:33.884 "rw_mbytes_per_sec": 0, 00:09:33.884 "r_mbytes_per_sec": 0, 00:09:33.884 "w_mbytes_per_sec": 0 00:09:33.884 }, 00:09:33.884 "claimed": true, 00:09:33.884 "claim_type": "exclusive_write", 00:09:33.884 "zoned": false, 00:09:33.884 "supported_io_types": { 00:09:33.884 "read": true, 00:09:33.884 "write": true, 00:09:33.884 "unmap": true, 00:09:33.884 "flush": true, 00:09:33.884 "reset": true, 00:09:33.884 "nvme_admin": false, 00:09:33.884 "nvme_io": false, 00:09:33.884 "nvme_io_md": false, 00:09:33.884 "write_zeroes": true, 00:09:33.884 "zcopy": true, 00:09:33.884 "get_zone_info": false, 00:09:33.884 "zone_management": false, 00:09:33.884 "zone_append": false, 00:09:33.884 "compare": false, 00:09:33.884 "compare_and_write": false, 00:09:33.884 "abort": true, 00:09:33.884 "seek_hole": false, 00:09:33.884 "seek_data": false, 00:09:33.884 "copy": true, 00:09:33.884 "nvme_iov_md": false 00:09:33.884 }, 00:09:33.884 "memory_domains": [ 00:09:33.884 { 00:09:33.884 "dma_device_id": "system", 00:09:33.884 "dma_device_type": 1 00:09:33.884 }, 00:09:33.884 { 00:09:33.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.884 "dma_device_type": 2 00:09:33.884 } 00:09:33.884 ], 00:09:33.884 "driver_specific": {} 00:09:33.884 } 00:09:33.884 ] 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.884 "name": "Existed_Raid", 
00:09:33.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.884 "strip_size_kb": 0, 00:09:33.884 "state": "configuring", 00:09:33.884 "raid_level": "raid1", 00:09:33.884 "superblock": false, 00:09:33.884 "num_base_bdevs": 4, 00:09:33.884 "num_base_bdevs_discovered": 1, 00:09:33.884 "num_base_bdevs_operational": 4, 00:09:33.884 "base_bdevs_list": [ 00:09:33.884 { 00:09:33.884 "name": "BaseBdev1", 00:09:33.884 "uuid": "6457ef9d-ec8c-49d0-b104-bbc81f8090cc", 00:09:33.884 "is_configured": true, 00:09:33.884 "data_offset": 0, 00:09:33.884 "data_size": 65536 00:09:33.884 }, 00:09:33.884 { 00:09:33.884 "name": "BaseBdev2", 00:09:33.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.884 "is_configured": false, 00:09:33.884 "data_offset": 0, 00:09:33.884 "data_size": 0 00:09:33.884 }, 00:09:33.884 { 00:09:33.884 "name": "BaseBdev3", 00:09:33.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.884 "is_configured": false, 00:09:33.884 "data_offset": 0, 00:09:33.884 "data_size": 0 00:09:33.884 }, 00:09:33.884 { 00:09:33.884 "name": "BaseBdev4", 00:09:33.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.884 "is_configured": false, 00:09:33.884 "data_offset": 0, 00:09:33.884 "data_size": 0 00:09:33.884 } 00:09:33.884 ] 00:09:33.884 }' 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.884 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.143 [2024-10-30 09:44:12.605292] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:34.143 [2024-10-30 09:44:12.605332] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.143 [2024-10-30 09:44:12.613340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.143 [2024-10-30 09:44:12.614835] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.143 [2024-10-30 09:44:12.614869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.143 [2024-10-30 09:44:12.614877] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:34.143 [2024-10-30 09:44:12.614886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:34.143 [2024-10-30 09:44:12.614892] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:34.143 [2024-10-30 09:44:12.614898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:34.143 
09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.143 "name": "Existed_Raid", 00:09:34.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.143 "strip_size_kb": 0, 00:09:34.143 "state": "configuring", 00:09:34.143 "raid_level": "raid1", 00:09:34.143 "superblock": false, 00:09:34.143 "num_base_bdevs": 4, 00:09:34.143 "num_base_bdevs_discovered": 1, 
00:09:34.143 "num_base_bdevs_operational": 4, 00:09:34.143 "base_bdevs_list": [ 00:09:34.143 { 00:09:34.143 "name": "BaseBdev1", 00:09:34.143 "uuid": "6457ef9d-ec8c-49d0-b104-bbc81f8090cc", 00:09:34.143 "is_configured": true, 00:09:34.143 "data_offset": 0, 00:09:34.143 "data_size": 65536 00:09:34.143 }, 00:09:34.143 { 00:09:34.143 "name": "BaseBdev2", 00:09:34.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.143 "is_configured": false, 00:09:34.143 "data_offset": 0, 00:09:34.143 "data_size": 0 00:09:34.143 }, 00:09:34.143 { 00:09:34.143 "name": "BaseBdev3", 00:09:34.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.143 "is_configured": false, 00:09:34.143 "data_offset": 0, 00:09:34.143 "data_size": 0 00:09:34.143 }, 00:09:34.143 { 00:09:34.143 "name": "BaseBdev4", 00:09:34.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.143 "is_configured": false, 00:09:34.143 "data_offset": 0, 00:09:34.143 "data_size": 0 00:09:34.143 } 00:09:34.143 ] 00:09:34.143 }' 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.143 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.401 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.402 [2024-10-30 09:44:12.943420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.402 BaseBdev2 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.402 [ 00:09:34.402 { 00:09:34.402 "name": "BaseBdev2", 00:09:34.402 "aliases": [ 00:09:34.402 "549de8f1-e38c-4370-91d9-b85fc2460dec" 00:09:34.402 ], 00:09:34.402 "product_name": "Malloc disk", 00:09:34.402 "block_size": 512, 00:09:34.402 "num_blocks": 65536, 00:09:34.402 "uuid": "549de8f1-e38c-4370-91d9-b85fc2460dec", 00:09:34.402 "assigned_rate_limits": { 00:09:34.402 "rw_ios_per_sec": 0, 00:09:34.402 "rw_mbytes_per_sec": 0, 00:09:34.402 "r_mbytes_per_sec": 0, 00:09:34.402 "w_mbytes_per_sec": 0 00:09:34.402 }, 00:09:34.402 "claimed": true, 00:09:34.402 "claim_type": "exclusive_write", 00:09:34.402 "zoned": false, 00:09:34.402 "supported_io_types": { 00:09:34.402 "read": true, 
00:09:34.402 "write": true, 00:09:34.402 "unmap": true, 00:09:34.402 "flush": true, 00:09:34.402 "reset": true, 00:09:34.402 "nvme_admin": false, 00:09:34.402 "nvme_io": false, 00:09:34.402 "nvme_io_md": false, 00:09:34.402 "write_zeroes": true, 00:09:34.402 "zcopy": true, 00:09:34.402 "get_zone_info": false, 00:09:34.402 "zone_management": false, 00:09:34.402 "zone_append": false, 00:09:34.402 "compare": false, 00:09:34.402 "compare_and_write": false, 00:09:34.402 "abort": true, 00:09:34.402 "seek_hole": false, 00:09:34.402 "seek_data": false, 00:09:34.402 "copy": true, 00:09:34.402 "nvme_iov_md": false 00:09:34.402 }, 00:09:34.402 "memory_domains": [ 00:09:34.402 { 00:09:34.402 "dma_device_id": "system", 00:09:34.402 "dma_device_type": 1 00:09:34.402 }, 00:09:34.402 { 00:09:34.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.402 "dma_device_type": 2 00:09:34.402 } 00:09:34.402 ], 00:09:34.402 "driver_specific": {} 00:09:34.402 } 00:09:34.402 ] 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.402 09:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.402 "name": "Existed_Raid", 00:09:34.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.402 "strip_size_kb": 0, 00:09:34.402 "state": "configuring", 00:09:34.402 "raid_level": "raid1", 00:09:34.402 "superblock": false, 00:09:34.402 "num_base_bdevs": 4, 00:09:34.402 "num_base_bdevs_discovered": 2, 00:09:34.402 "num_base_bdevs_operational": 4, 00:09:34.402 "base_bdevs_list": [ 00:09:34.402 { 00:09:34.402 "name": "BaseBdev1", 00:09:34.402 "uuid": "6457ef9d-ec8c-49d0-b104-bbc81f8090cc", 00:09:34.402 "is_configured": true, 00:09:34.402 "data_offset": 0, 00:09:34.402 "data_size": 65536 00:09:34.402 }, 00:09:34.402 { 00:09:34.402 "name": "BaseBdev2", 00:09:34.402 "uuid": "549de8f1-e38c-4370-91d9-b85fc2460dec", 00:09:34.402 "is_configured": true, 
00:09:34.402 "data_offset": 0, 00:09:34.402 "data_size": 65536 00:09:34.402 }, 00:09:34.402 { 00:09:34.402 "name": "BaseBdev3", 00:09:34.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.402 "is_configured": false, 00:09:34.402 "data_offset": 0, 00:09:34.402 "data_size": 0 00:09:34.402 }, 00:09:34.402 { 00:09:34.402 "name": "BaseBdev4", 00:09:34.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.402 "is_configured": false, 00:09:34.402 "data_offset": 0, 00:09:34.402 "data_size": 0 00:09:34.402 } 00:09:34.402 ] 00:09:34.402 }' 00:09:34.402 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.402 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.968 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:34.968 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.968 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.968 [2024-10-30 09:44:13.328000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.968 BaseBdev3 00:09:34.968 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.968 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:34.968 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:34.968 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:34.968 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:34.968 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:34.968 09:44:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:34.968 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:34.968 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.968 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.968 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.968 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:34.968 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.968 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.968 [ 00:09:34.968 { 00:09:34.968 "name": "BaseBdev3", 00:09:34.968 "aliases": [ 00:09:34.968 "414e5fb3-677c-4e81-96f3-1a8269346e8f" 00:09:34.968 ], 00:09:34.968 "product_name": "Malloc disk", 00:09:34.968 "block_size": 512, 00:09:34.968 "num_blocks": 65536, 00:09:34.968 "uuid": "414e5fb3-677c-4e81-96f3-1a8269346e8f", 00:09:34.968 "assigned_rate_limits": { 00:09:34.968 "rw_ios_per_sec": 0, 00:09:34.968 "rw_mbytes_per_sec": 0, 00:09:34.968 "r_mbytes_per_sec": 0, 00:09:34.968 "w_mbytes_per_sec": 0 00:09:34.968 }, 00:09:34.968 "claimed": true, 00:09:34.968 "claim_type": "exclusive_write", 00:09:34.968 "zoned": false, 00:09:34.968 "supported_io_types": { 00:09:34.969 "read": true, 00:09:34.969 "write": true, 00:09:34.969 "unmap": true, 00:09:34.969 "flush": true, 00:09:34.969 "reset": true, 00:09:34.969 "nvme_admin": false, 00:09:34.969 "nvme_io": false, 00:09:34.969 "nvme_io_md": false, 00:09:34.969 "write_zeroes": true, 00:09:34.969 "zcopy": true, 00:09:34.969 "get_zone_info": false, 00:09:34.969 "zone_management": false, 00:09:34.969 "zone_append": false, 00:09:34.969 "compare": false, 00:09:34.969 "compare_and_write": false, 
00:09:34.969 "abort": true, 00:09:34.969 "seek_hole": false, 00:09:34.969 "seek_data": false, 00:09:34.969 "copy": true, 00:09:34.969 "nvme_iov_md": false 00:09:34.969 }, 00:09:34.969 "memory_domains": [ 00:09:34.969 { 00:09:34.969 "dma_device_id": "system", 00:09:34.969 "dma_device_type": 1 00:09:34.969 }, 00:09:34.969 { 00:09:34.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.969 "dma_device_type": 2 00:09:34.969 } 00:09:34.969 ], 00:09:34.969 "driver_specific": {} 00:09:34.969 } 00:09:34.969 ] 00:09:34.969 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.969 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:34.969 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:34.969 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.969 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:34.969 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.969 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.969 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.969 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.969 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:34.969 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.969 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.969 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:34.969 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.969 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.969 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.969 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.969 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.969 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.969 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.969 "name": "Existed_Raid", 00:09:34.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.969 "strip_size_kb": 0, 00:09:34.969 "state": "configuring", 00:09:34.969 "raid_level": "raid1", 00:09:34.969 "superblock": false, 00:09:34.969 "num_base_bdevs": 4, 00:09:34.969 "num_base_bdevs_discovered": 3, 00:09:34.969 "num_base_bdevs_operational": 4, 00:09:34.969 "base_bdevs_list": [ 00:09:34.969 { 00:09:34.969 "name": "BaseBdev1", 00:09:34.969 "uuid": "6457ef9d-ec8c-49d0-b104-bbc81f8090cc", 00:09:34.969 "is_configured": true, 00:09:34.969 "data_offset": 0, 00:09:34.969 "data_size": 65536 00:09:34.969 }, 00:09:34.969 { 00:09:34.969 "name": "BaseBdev2", 00:09:34.969 "uuid": "549de8f1-e38c-4370-91d9-b85fc2460dec", 00:09:34.969 "is_configured": true, 00:09:34.969 "data_offset": 0, 00:09:34.969 "data_size": 65536 00:09:34.969 }, 00:09:34.969 { 00:09:34.969 "name": "BaseBdev3", 00:09:34.969 "uuid": "414e5fb3-677c-4e81-96f3-1a8269346e8f", 00:09:34.969 "is_configured": true, 00:09:34.969 "data_offset": 0, 00:09:34.969 "data_size": 65536 00:09:34.969 }, 00:09:34.969 { 00:09:34.969 "name": "BaseBdev4", 00:09:34.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.969 "is_configured": false, 
00:09:34.969 "data_offset": 0, 00:09:34.969 "data_size": 0 00:09:34.969 } 00:09:34.969 ] 00:09:34.969 }' 00:09:34.969 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.969 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.227 [2024-10-30 09:44:13.682276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:35.227 [2024-10-30 09:44:13.682322] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:35.227 [2024-10-30 09:44:13.682330] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:35.227 [2024-10-30 09:44:13.682597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:35.227 [2024-10-30 09:44:13.682751] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:35.227 [2024-10-30 09:44:13.682762] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:35.227 [2024-10-30 09:44:13.682982] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.227 BaseBdev4 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.227 [ 00:09:35.227 { 00:09:35.227 "name": "BaseBdev4", 00:09:35.227 "aliases": [ 00:09:35.227 "108de820-7c06-4ec8-9788-2d77285cebf5" 00:09:35.227 ], 00:09:35.227 "product_name": "Malloc disk", 00:09:35.227 "block_size": 512, 00:09:35.227 "num_blocks": 65536, 00:09:35.227 "uuid": "108de820-7c06-4ec8-9788-2d77285cebf5", 00:09:35.227 "assigned_rate_limits": { 00:09:35.227 "rw_ios_per_sec": 0, 00:09:35.227 "rw_mbytes_per_sec": 0, 00:09:35.227 "r_mbytes_per_sec": 0, 00:09:35.227 "w_mbytes_per_sec": 0 00:09:35.227 }, 00:09:35.227 "claimed": true, 00:09:35.227 "claim_type": "exclusive_write", 00:09:35.227 "zoned": false, 00:09:35.227 "supported_io_types": { 00:09:35.227 "read": true, 00:09:35.227 "write": true, 00:09:35.227 "unmap": true, 00:09:35.227 "flush": true, 00:09:35.227 "reset": true, 00:09:35.227 
"nvme_admin": false, 00:09:35.227 "nvme_io": false, 00:09:35.227 "nvme_io_md": false, 00:09:35.227 "write_zeroes": true, 00:09:35.227 "zcopy": true, 00:09:35.227 "get_zone_info": false, 00:09:35.227 "zone_management": false, 00:09:35.227 "zone_append": false, 00:09:35.227 "compare": false, 00:09:35.227 "compare_and_write": false, 00:09:35.227 "abort": true, 00:09:35.227 "seek_hole": false, 00:09:35.227 "seek_data": false, 00:09:35.227 "copy": true, 00:09:35.227 "nvme_iov_md": false 00:09:35.227 }, 00:09:35.227 "memory_domains": [ 00:09:35.227 { 00:09:35.227 "dma_device_id": "system", 00:09:35.227 "dma_device_type": 1 00:09:35.227 }, 00:09:35.227 { 00:09:35.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.227 "dma_device_type": 2 00:09:35.227 } 00:09:35.227 ], 00:09:35.227 "driver_specific": {} 00:09:35.227 } 00:09:35.227 ] 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:35.227 09:44:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.227 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.227 "name": "Existed_Raid", 00:09:35.227 "uuid": "dbc9fc18-eece-474e-ab6c-4a09d1966eb4", 00:09:35.227 "strip_size_kb": 0, 00:09:35.227 "state": "online", 00:09:35.227 "raid_level": "raid1", 00:09:35.228 "superblock": false, 00:09:35.228 "num_base_bdevs": 4, 00:09:35.228 "num_base_bdevs_discovered": 4, 00:09:35.228 "num_base_bdevs_operational": 4, 00:09:35.228 "base_bdevs_list": [ 00:09:35.228 { 00:09:35.228 "name": "BaseBdev1", 00:09:35.228 "uuid": "6457ef9d-ec8c-49d0-b104-bbc81f8090cc", 00:09:35.228 "is_configured": true, 00:09:35.228 "data_offset": 0, 00:09:35.228 "data_size": 65536 00:09:35.228 }, 00:09:35.228 { 00:09:35.228 "name": "BaseBdev2", 00:09:35.228 "uuid": "549de8f1-e38c-4370-91d9-b85fc2460dec", 00:09:35.228 "is_configured": true, 00:09:35.228 "data_offset": 0, 00:09:35.228 "data_size": 65536 00:09:35.228 }, 00:09:35.228 { 00:09:35.228 "name": "BaseBdev3", 00:09:35.228 "uuid": 
"414e5fb3-677c-4e81-96f3-1a8269346e8f", 00:09:35.228 "is_configured": true, 00:09:35.228 "data_offset": 0, 00:09:35.228 "data_size": 65536 00:09:35.228 }, 00:09:35.228 { 00:09:35.228 "name": "BaseBdev4", 00:09:35.228 "uuid": "108de820-7c06-4ec8-9788-2d77285cebf5", 00:09:35.228 "is_configured": true, 00:09:35.228 "data_offset": 0, 00:09:35.228 "data_size": 65536 00:09:35.228 } 00:09:35.228 ] 00:09:35.228 }' 00:09:35.228 09:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.228 09:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.485 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:35.485 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:35.486 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:35.486 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:35.486 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:35.486 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:35.486 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:35.486 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:35.486 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.486 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.486 [2024-10-30 09:44:14.018770] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.486 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.486 09:44:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:35.486 "name": "Existed_Raid", 00:09:35.486 "aliases": [ 00:09:35.486 "dbc9fc18-eece-474e-ab6c-4a09d1966eb4" 00:09:35.486 ], 00:09:35.486 "product_name": "Raid Volume", 00:09:35.486 "block_size": 512, 00:09:35.486 "num_blocks": 65536, 00:09:35.486 "uuid": "dbc9fc18-eece-474e-ab6c-4a09d1966eb4", 00:09:35.486 "assigned_rate_limits": { 00:09:35.486 "rw_ios_per_sec": 0, 00:09:35.486 "rw_mbytes_per_sec": 0, 00:09:35.486 "r_mbytes_per_sec": 0, 00:09:35.486 "w_mbytes_per_sec": 0 00:09:35.486 }, 00:09:35.486 "claimed": false, 00:09:35.486 "zoned": false, 00:09:35.486 "supported_io_types": { 00:09:35.486 "read": true, 00:09:35.486 "write": true, 00:09:35.486 "unmap": false, 00:09:35.486 "flush": false, 00:09:35.486 "reset": true, 00:09:35.486 "nvme_admin": false, 00:09:35.486 "nvme_io": false, 00:09:35.486 "nvme_io_md": false, 00:09:35.486 "write_zeroes": true, 00:09:35.486 "zcopy": false, 00:09:35.486 "get_zone_info": false, 00:09:35.486 "zone_management": false, 00:09:35.486 "zone_append": false, 00:09:35.486 "compare": false, 00:09:35.486 "compare_and_write": false, 00:09:35.486 "abort": false, 00:09:35.486 "seek_hole": false, 00:09:35.486 "seek_data": false, 00:09:35.486 "copy": false, 00:09:35.486 "nvme_iov_md": false 00:09:35.486 }, 00:09:35.486 "memory_domains": [ 00:09:35.486 { 00:09:35.486 "dma_device_id": "system", 00:09:35.486 "dma_device_type": 1 00:09:35.486 }, 00:09:35.486 { 00:09:35.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.486 "dma_device_type": 2 00:09:35.486 }, 00:09:35.486 { 00:09:35.486 "dma_device_id": "system", 00:09:35.486 "dma_device_type": 1 00:09:35.486 }, 00:09:35.486 { 00:09:35.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.486 "dma_device_type": 2 00:09:35.486 }, 00:09:35.486 { 00:09:35.486 "dma_device_id": "system", 00:09:35.486 "dma_device_type": 1 00:09:35.486 }, 00:09:35.486 { 00:09:35.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:35.486 "dma_device_type": 2 00:09:35.486 }, 00:09:35.486 { 00:09:35.486 "dma_device_id": "system", 00:09:35.486 "dma_device_type": 1 00:09:35.486 }, 00:09:35.486 { 00:09:35.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.486 "dma_device_type": 2 00:09:35.486 } 00:09:35.486 ], 00:09:35.486 "driver_specific": { 00:09:35.486 "raid": { 00:09:35.486 "uuid": "dbc9fc18-eece-474e-ab6c-4a09d1966eb4", 00:09:35.486 "strip_size_kb": 0, 00:09:35.486 "state": "online", 00:09:35.486 "raid_level": "raid1", 00:09:35.486 "superblock": false, 00:09:35.486 "num_base_bdevs": 4, 00:09:35.486 "num_base_bdevs_discovered": 4, 00:09:35.486 "num_base_bdevs_operational": 4, 00:09:35.486 "base_bdevs_list": [ 00:09:35.486 { 00:09:35.486 "name": "BaseBdev1", 00:09:35.486 "uuid": "6457ef9d-ec8c-49d0-b104-bbc81f8090cc", 00:09:35.486 "is_configured": true, 00:09:35.486 "data_offset": 0, 00:09:35.486 "data_size": 65536 00:09:35.486 }, 00:09:35.486 { 00:09:35.486 "name": "BaseBdev2", 00:09:35.486 "uuid": "549de8f1-e38c-4370-91d9-b85fc2460dec", 00:09:35.486 "is_configured": true, 00:09:35.486 "data_offset": 0, 00:09:35.486 "data_size": 65536 00:09:35.486 }, 00:09:35.486 { 00:09:35.486 "name": "BaseBdev3", 00:09:35.486 "uuid": "414e5fb3-677c-4e81-96f3-1a8269346e8f", 00:09:35.486 "is_configured": true, 00:09:35.486 "data_offset": 0, 00:09:35.486 "data_size": 65536 00:09:35.486 }, 00:09:35.486 { 00:09:35.486 "name": "BaseBdev4", 00:09:35.486 "uuid": "108de820-7c06-4ec8-9788-2d77285cebf5", 00:09:35.486 "is_configured": true, 00:09:35.486 "data_offset": 0, 00:09:35.486 "data_size": 65536 00:09:35.486 } 00:09:35.486 ] 00:09:35.486 } 00:09:35.486 } 00:09:35.486 }' 00:09:35.486 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.486 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:35.486 BaseBdev2 00:09:35.486 BaseBdev3 
00:09:35.486 BaseBdev4' 00:09:35.486 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.486 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.486 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.486 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:35.486 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.486 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.486 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.745 09:44:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.745 09:44:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.745 [2024-10-30 09:44:14.222498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.745 
09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.745 "name": "Existed_Raid", 00:09:35.745 "uuid": "dbc9fc18-eece-474e-ab6c-4a09d1966eb4", 00:09:35.745 "strip_size_kb": 0, 00:09:35.745 "state": "online", 00:09:35.745 "raid_level": "raid1", 00:09:35.745 "superblock": false, 00:09:35.745 "num_base_bdevs": 4, 00:09:35.745 "num_base_bdevs_discovered": 3, 00:09:35.745 "num_base_bdevs_operational": 3, 00:09:35.745 "base_bdevs_list": [ 00:09:35.745 { 00:09:35.745 "name": null, 00:09:35.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.745 "is_configured": false, 00:09:35.745 "data_offset": 0, 00:09:35.745 "data_size": 65536 00:09:35.745 }, 00:09:35.745 { 00:09:35.745 "name": "BaseBdev2", 00:09:35.745 "uuid": "549de8f1-e38c-4370-91d9-b85fc2460dec", 00:09:35.745 "is_configured": true, 00:09:35.745 "data_offset": 0, 00:09:35.745 "data_size": 65536 00:09:35.745 }, 00:09:35.745 { 00:09:35.745 "name": "BaseBdev3", 00:09:35.745 "uuid": "414e5fb3-677c-4e81-96f3-1a8269346e8f", 00:09:35.745 "is_configured": true, 00:09:35.745 "data_offset": 0, 
00:09:35.745 "data_size": 65536 00:09:35.745 }, 00:09:35.745 { 00:09:35.745 "name": "BaseBdev4", 00:09:35.745 "uuid": "108de820-7c06-4ec8-9788-2d77285cebf5", 00:09:35.745 "is_configured": true, 00:09:35.745 "data_offset": 0, 00:09:35.745 "data_size": 65536 00:09:35.745 } 00:09:35.745 ] 00:09:35.745 }' 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.745 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.003 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:36.003 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.003 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.003 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:36.003 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.003 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.003 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.003 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:36.003 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.003 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:36.003 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.261 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.262 [2024-10-30 09:44:14.628311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.262 [2024-10-30 09:44:14.737508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.262 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.262 [2024-10-30 09:44:14.835613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:36.262 [2024-10-30 09:44:14.835695] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:36.521 [2024-10-30 09:44:14.893755] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.521 [2024-10-30 09:44:14.893795] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.521 [2024-10-30 09:44:14.893807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:36.521 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.521 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:36.521 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.521 09:44:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.521 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.521 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.521 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:36.521 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.521 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:36.521 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:36.521 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:36.521 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:36.521 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.521 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:36.521 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.521 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.521 BaseBdev2 00:09:36.521 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.521 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:36.521 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:36.521 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:36.521 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:36.521 09:44:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:36.522 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:36.522 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:36.522 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.522 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.522 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.522 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:36.522 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.522 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.522 [ 00:09:36.522 { 00:09:36.522 "name": "BaseBdev2", 00:09:36.522 "aliases": [ 00:09:36.522 "ebd7c165-97a8-4ee1-846b-b8876a27d903" 00:09:36.522 ], 00:09:36.522 "product_name": "Malloc disk", 00:09:36.522 "block_size": 512, 00:09:36.522 "num_blocks": 65536, 00:09:36.522 "uuid": "ebd7c165-97a8-4ee1-846b-b8876a27d903", 00:09:36.522 "assigned_rate_limits": { 00:09:36.522 "rw_ios_per_sec": 0, 00:09:36.522 "rw_mbytes_per_sec": 0, 00:09:36.522 "r_mbytes_per_sec": 0, 00:09:36.522 "w_mbytes_per_sec": 0 00:09:36.522 }, 00:09:36.522 "claimed": false, 00:09:36.522 "zoned": false, 00:09:36.522 "supported_io_types": { 00:09:36.522 "read": true, 00:09:36.522 "write": true, 00:09:36.522 "unmap": true, 00:09:36.522 "flush": true, 00:09:36.522 "reset": true, 00:09:36.522 "nvme_admin": false, 00:09:36.522 "nvme_io": false, 00:09:36.522 "nvme_io_md": false, 00:09:36.522 "write_zeroes": true, 00:09:36.522 "zcopy": true, 00:09:36.522 "get_zone_info": false, 00:09:36.522 "zone_management": false, 00:09:36.522 "zone_append": false, 
00:09:36.522 "compare": false, 00:09:36.522 "compare_and_write": false, 00:09:36.522 "abort": true, 00:09:36.522 "seek_hole": false, 00:09:36.522 "seek_data": false, 00:09:36.522 "copy": true, 00:09:36.522 "nvme_iov_md": false 00:09:36.522 }, 00:09:36.522 "memory_domains": [ 00:09:36.522 { 00:09:36.522 "dma_device_id": "system", 00:09:36.522 "dma_device_type": 1 00:09:36.522 }, 00:09:36.522 { 00:09:36.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.522 "dma_device_type": 2 00:09:36.522 } 00:09:36.522 ], 00:09:36.522 "driver_specific": {} 00:09:36.522 } 00:09:36.522 ] 00:09:36.522 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.522 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:36.522 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:36.522 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.522 09:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:36.522 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.522 09:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.522 BaseBdev3 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.522 [ 00:09:36.522 { 00:09:36.522 "name": "BaseBdev3", 00:09:36.522 "aliases": [ 00:09:36.522 "f6229e56-88a9-457b-a53b-d1ab101d16d6" 00:09:36.522 ], 00:09:36.522 "product_name": "Malloc disk", 00:09:36.522 "block_size": 512, 00:09:36.522 "num_blocks": 65536, 00:09:36.522 "uuid": "f6229e56-88a9-457b-a53b-d1ab101d16d6", 00:09:36.522 "assigned_rate_limits": { 00:09:36.522 "rw_ios_per_sec": 0, 00:09:36.522 "rw_mbytes_per_sec": 0, 00:09:36.522 "r_mbytes_per_sec": 0, 00:09:36.522 "w_mbytes_per_sec": 0 00:09:36.522 }, 00:09:36.522 "claimed": false, 00:09:36.522 "zoned": false, 00:09:36.522 "supported_io_types": { 00:09:36.522 "read": true, 00:09:36.522 "write": true, 00:09:36.522 "unmap": true, 00:09:36.522 "flush": true, 00:09:36.522 "reset": true, 00:09:36.522 "nvme_admin": false, 00:09:36.522 "nvme_io": false, 00:09:36.522 "nvme_io_md": false, 00:09:36.522 "write_zeroes": true, 00:09:36.522 "zcopy": true, 00:09:36.522 "get_zone_info": false, 00:09:36.522 "zone_management": false, 00:09:36.522 "zone_append": false, 
00:09:36.522 "compare": false, 00:09:36.522 "compare_and_write": false, 00:09:36.522 "abort": true, 00:09:36.522 "seek_hole": false, 00:09:36.522 "seek_data": false, 00:09:36.522 "copy": true, 00:09:36.522 "nvme_iov_md": false 00:09:36.522 }, 00:09:36.522 "memory_domains": [ 00:09:36.522 { 00:09:36.522 "dma_device_id": "system", 00:09:36.522 "dma_device_type": 1 00:09:36.522 }, 00:09:36.522 { 00:09:36.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.522 "dma_device_type": 2 00:09:36.522 } 00:09:36.522 ], 00:09:36.522 "driver_specific": {} 00:09:36.522 } 00:09:36.522 ] 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.522 BaseBdev4 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.522 [ 00:09:36.522 { 00:09:36.522 "name": "BaseBdev4", 00:09:36.522 "aliases": [ 00:09:36.522 "c911f032-ac4c-4cd6-9cd5-0bd6ac6aca86" 00:09:36.522 ], 00:09:36.522 "product_name": "Malloc disk", 00:09:36.522 "block_size": 512, 00:09:36.522 "num_blocks": 65536, 00:09:36.522 "uuid": "c911f032-ac4c-4cd6-9cd5-0bd6ac6aca86", 00:09:36.522 "assigned_rate_limits": { 00:09:36.522 "rw_ios_per_sec": 0, 00:09:36.522 "rw_mbytes_per_sec": 0, 00:09:36.522 "r_mbytes_per_sec": 0, 00:09:36.522 "w_mbytes_per_sec": 0 00:09:36.522 }, 00:09:36.522 "claimed": false, 00:09:36.522 "zoned": false, 00:09:36.522 "supported_io_types": { 00:09:36.522 "read": true, 00:09:36.522 "write": true, 00:09:36.522 "unmap": true, 00:09:36.522 "flush": true, 00:09:36.522 "reset": true, 00:09:36.522 "nvme_admin": false, 00:09:36.522 "nvme_io": false, 00:09:36.522 "nvme_io_md": false, 00:09:36.522 "write_zeroes": true, 00:09:36.522 "zcopy": true, 00:09:36.522 "get_zone_info": false, 00:09:36.522 "zone_management": false, 00:09:36.522 "zone_append": false, 
00:09:36.522 "compare": false, 00:09:36.522 "compare_and_write": false, 00:09:36.522 "abort": true, 00:09:36.522 "seek_hole": false, 00:09:36.522 "seek_data": false, 00:09:36.522 "copy": true, 00:09:36.522 "nvme_iov_md": false 00:09:36.522 }, 00:09:36.522 "memory_domains": [ 00:09:36.522 { 00:09:36.522 "dma_device_id": "system", 00:09:36.522 "dma_device_type": 1 00:09:36.522 }, 00:09:36.522 { 00:09:36.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.522 "dma_device_type": 2 00:09:36.522 } 00:09:36.522 ], 00:09:36.522 "driver_specific": {} 00:09:36.522 } 00:09:36.522 ] 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.522 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:36.523 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:36.523 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.523 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:36.523 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.523 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.523 [2024-10-30 09:44:15.093274] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:36.523 [2024-10-30 09:44:15.093322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:36.523 [2024-10-30 09:44:15.093345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:36.523 [2024-10-30 09:44:15.095152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:36.523 [2024-10-30 09:44:15.095201] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:36.523 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.523 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:36.523 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.523 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.523 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.523 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.523 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:36.523 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.523 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.523 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.523 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.523 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.523 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.523 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.523 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.523 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.523 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:36.523 "name": "Existed_Raid", 00:09:36.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.523 "strip_size_kb": 0, 00:09:36.523 "state": "configuring", 00:09:36.523 "raid_level": "raid1", 00:09:36.523 "superblock": false, 00:09:36.523 "num_base_bdevs": 4, 00:09:36.523 "num_base_bdevs_discovered": 3, 00:09:36.523 "num_base_bdevs_operational": 4, 00:09:36.523 "base_bdevs_list": [ 00:09:36.523 { 00:09:36.523 "name": "BaseBdev1", 00:09:36.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.523 "is_configured": false, 00:09:36.523 "data_offset": 0, 00:09:36.523 "data_size": 0 00:09:36.523 }, 00:09:36.523 { 00:09:36.523 "name": "BaseBdev2", 00:09:36.523 "uuid": "ebd7c165-97a8-4ee1-846b-b8876a27d903", 00:09:36.523 "is_configured": true, 00:09:36.523 "data_offset": 0, 00:09:36.523 "data_size": 65536 00:09:36.523 }, 00:09:36.523 { 00:09:36.523 "name": "BaseBdev3", 00:09:36.523 "uuid": "f6229e56-88a9-457b-a53b-d1ab101d16d6", 00:09:36.523 "is_configured": true, 00:09:36.523 "data_offset": 0, 00:09:36.523 "data_size": 65536 00:09:36.523 }, 00:09:36.523 { 00:09:36.523 "name": "BaseBdev4", 00:09:36.523 "uuid": "c911f032-ac4c-4cd6-9cd5-0bd6ac6aca86", 00:09:36.523 "is_configured": true, 00:09:36.523 "data_offset": 0, 00:09:36.523 "data_size": 65536 00:09:36.523 } 00:09:36.523 ] 00:09:36.523 }' 00:09:36.523 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.523 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.780 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:36.780 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.780 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.038 [2024-10-30 09:44:15.401334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:09:37.038 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.038 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:37.038 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.038 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.038 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.038 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.038 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.038 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.038 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.038 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.038 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.038 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.038 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.038 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.038 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.038 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.039 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.039 "name": "Existed_Raid", 00:09:37.039 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:37.039 "strip_size_kb": 0, 00:09:37.039 "state": "configuring", 00:09:37.039 "raid_level": "raid1", 00:09:37.039 "superblock": false, 00:09:37.039 "num_base_bdevs": 4, 00:09:37.039 "num_base_bdevs_discovered": 2, 00:09:37.039 "num_base_bdevs_operational": 4, 00:09:37.039 "base_bdevs_list": [ 00:09:37.039 { 00:09:37.039 "name": "BaseBdev1", 00:09:37.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.039 "is_configured": false, 00:09:37.039 "data_offset": 0, 00:09:37.039 "data_size": 0 00:09:37.039 }, 00:09:37.039 { 00:09:37.039 "name": null, 00:09:37.039 "uuid": "ebd7c165-97a8-4ee1-846b-b8876a27d903", 00:09:37.039 "is_configured": false, 00:09:37.039 "data_offset": 0, 00:09:37.039 "data_size": 65536 00:09:37.039 }, 00:09:37.039 { 00:09:37.039 "name": "BaseBdev3", 00:09:37.039 "uuid": "f6229e56-88a9-457b-a53b-d1ab101d16d6", 00:09:37.039 "is_configured": true, 00:09:37.039 "data_offset": 0, 00:09:37.039 "data_size": 65536 00:09:37.039 }, 00:09:37.039 { 00:09:37.039 "name": "BaseBdev4", 00:09:37.039 "uuid": "c911f032-ac4c-4cd6-9cd5-0bd6ac6aca86", 00:09:37.039 "is_configured": true, 00:09:37.039 "data_offset": 0, 00:09:37.039 "data_size": 65536 00:09:37.039 } 00:09:37.039 ] 00:09:37.039 }' 00:09:37.039 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.039 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.297 [2024-10-30 09:44:15.775945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.297 BaseBdev1 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.297 [ 00:09:37.297 { 00:09:37.297 "name": "BaseBdev1", 00:09:37.297 "aliases": [ 00:09:37.297 "84079891-0cfc-4916-985c-341dcef163ad" 00:09:37.297 ], 00:09:37.297 "product_name": "Malloc disk", 00:09:37.297 "block_size": 512, 00:09:37.297 "num_blocks": 65536, 00:09:37.297 "uuid": "84079891-0cfc-4916-985c-341dcef163ad", 00:09:37.297 "assigned_rate_limits": { 00:09:37.297 "rw_ios_per_sec": 0, 00:09:37.297 "rw_mbytes_per_sec": 0, 00:09:37.297 "r_mbytes_per_sec": 0, 00:09:37.297 "w_mbytes_per_sec": 0 00:09:37.297 }, 00:09:37.297 "claimed": true, 00:09:37.297 "claim_type": "exclusive_write", 00:09:37.297 "zoned": false, 00:09:37.297 "supported_io_types": { 00:09:37.297 "read": true, 00:09:37.297 "write": true, 00:09:37.297 "unmap": true, 00:09:37.297 "flush": true, 00:09:37.297 "reset": true, 00:09:37.297 "nvme_admin": false, 00:09:37.297 "nvme_io": false, 00:09:37.297 "nvme_io_md": false, 00:09:37.297 "write_zeroes": true, 00:09:37.297 "zcopy": true, 00:09:37.297 "get_zone_info": false, 00:09:37.297 "zone_management": false, 00:09:37.297 "zone_append": false, 00:09:37.297 "compare": false, 00:09:37.297 "compare_and_write": false, 00:09:37.297 "abort": true, 00:09:37.297 "seek_hole": false, 00:09:37.297 "seek_data": false, 00:09:37.297 "copy": true, 00:09:37.297 "nvme_iov_md": false 00:09:37.297 }, 00:09:37.297 "memory_domains": [ 00:09:37.297 { 00:09:37.297 "dma_device_id": "system", 00:09:37.297 "dma_device_type": 1 00:09:37.297 }, 00:09:37.297 { 00:09:37.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.297 "dma_device_type": 2 00:09:37.297 } 00:09:37.297 ], 00:09:37.297 "driver_specific": {} 00:09:37.297 } 00:09:37.297 ] 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.297 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.297 "name": "Existed_Raid", 00:09:37.297 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:37.297 "strip_size_kb": 0, 00:09:37.297 "state": "configuring", 00:09:37.297 "raid_level": "raid1", 00:09:37.297 "superblock": false, 00:09:37.297 "num_base_bdevs": 4, 00:09:37.297 "num_base_bdevs_discovered": 3, 00:09:37.297 "num_base_bdevs_operational": 4, 00:09:37.297 "base_bdevs_list": [ 00:09:37.297 { 00:09:37.297 "name": "BaseBdev1", 00:09:37.297 "uuid": "84079891-0cfc-4916-985c-341dcef163ad", 00:09:37.297 "is_configured": true, 00:09:37.297 "data_offset": 0, 00:09:37.297 "data_size": 65536 00:09:37.297 }, 00:09:37.297 { 00:09:37.297 "name": null, 00:09:37.297 "uuid": "ebd7c165-97a8-4ee1-846b-b8876a27d903", 00:09:37.297 "is_configured": false, 00:09:37.297 "data_offset": 0, 00:09:37.297 "data_size": 65536 00:09:37.297 }, 00:09:37.297 { 00:09:37.297 "name": "BaseBdev3", 00:09:37.297 "uuid": "f6229e56-88a9-457b-a53b-d1ab101d16d6", 00:09:37.297 "is_configured": true, 00:09:37.297 "data_offset": 0, 00:09:37.297 "data_size": 65536 00:09:37.297 }, 00:09:37.297 { 00:09:37.297 "name": "BaseBdev4", 00:09:37.297 "uuid": "c911f032-ac4c-4cd6-9cd5-0bd6ac6aca86", 00:09:37.297 "is_configured": true, 00:09:37.297 "data_offset": 0, 00:09:37.297 "data_size": 65536 00:09:37.298 } 00:09:37.298 ] 00:09:37.298 }' 00:09:37.298 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.298 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.555 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.555 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:37.555 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.555 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.555 09:44:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.555 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:37.555 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:37.555 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.555 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.555 [2024-10-30 09:44:16.140055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:37.555 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.555 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:37.555 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.555 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.555 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.555 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.555 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.555 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.555 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.555 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.555 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.555 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
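The trace above repeatedly runs `verify_raid_bdev_state Existed_Raid configuring raid1 0 4`: it fetches `rpc_cmd bdev_raid_get_bdevs all`, selects the entry named `Existed_Raid` with `jq`, and compares the reported state, raid level, strip size, and operational bdev count against the expected values. A minimal Python sketch of that check, using a trimmed copy of the JSON printed in this log (the field names come from the trace; the helper itself is a hypothetical re-expression of the shell function, not SPDK code):

```python
import json

# Trimmed copy of the raid_bdev_info dump shown in the trace above.
RAID_DUMP = json.dumps([{
    "name": "Existed_Raid",
    "strip_size_kb": 0,
    "state": "configuring",
    "raid_level": "raid1",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 4,
}])

def verify_raid_bdev_state(dump, name, state, level, strip_size, operational):
    """Mirror the shell helper: select the named raid and compare its fields."""
    info = next(b for b in json.loads(dump) if b["name"] == name)
    assert info["state"] == state
    assert info["raid_level"] == level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    return info["num_base_bdevs_discovered"]

# With one base bdev slot unconfigured, 3 of 4 bdevs are discovered.
print(verify_raid_bdev_state(RAID_DUMP, "Existed_Raid",
                             "configuring", "raid1", 0, 4))  # → 3
```

The raid stays in `configuring` throughout this part of the trace precisely because `num_base_bdevs_discovered` is below `num_base_bdevs`.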
00:09:37.556 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.556 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.556 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.556 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.814 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.814 "name": "Existed_Raid", 00:09:37.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.814 "strip_size_kb": 0, 00:09:37.814 "state": "configuring", 00:09:37.814 "raid_level": "raid1", 00:09:37.814 "superblock": false, 00:09:37.814 "num_base_bdevs": 4, 00:09:37.814 "num_base_bdevs_discovered": 2, 00:09:37.814 "num_base_bdevs_operational": 4, 00:09:37.814 "base_bdevs_list": [ 00:09:37.814 { 00:09:37.814 "name": "BaseBdev1", 00:09:37.814 "uuid": "84079891-0cfc-4916-985c-341dcef163ad", 00:09:37.814 "is_configured": true, 00:09:37.814 "data_offset": 0, 00:09:37.814 "data_size": 65536 00:09:37.814 }, 00:09:37.814 { 00:09:37.814 "name": null, 00:09:37.814 "uuid": "ebd7c165-97a8-4ee1-846b-b8876a27d903", 00:09:37.814 "is_configured": false, 00:09:37.814 "data_offset": 0, 00:09:37.814 "data_size": 65536 00:09:37.814 }, 00:09:37.814 { 00:09:37.814 "name": null, 00:09:37.814 "uuid": "f6229e56-88a9-457b-a53b-d1ab101d16d6", 00:09:37.814 "is_configured": false, 00:09:37.814 "data_offset": 0, 00:09:37.814 "data_size": 65536 00:09:37.814 }, 00:09:37.814 { 00:09:37.814 "name": "BaseBdev4", 00:09:37.814 "uuid": "c911f032-ac4c-4cd6-9cd5-0bd6ac6aca86", 00:09:37.814 "is_configured": true, 00:09:37.814 "data_offset": 0, 00:09:37.814 "data_size": 65536 00:09:37.814 } 00:09:37.814 ] 00:09:37.814 }' 00:09:37.814 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.814 09:44:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.072 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:38.072 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.072 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.072 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.072 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.072 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:38.072 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:38.072 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.072 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.072 [2024-10-30 09:44:16.492131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:38.072 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.072 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:38.072 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.072 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.072 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.072 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.072 09:44:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.072 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.072 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.072 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.072 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.072 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.072 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.072 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.072 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.072 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.072 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.072 "name": "Existed_Raid", 00:09:38.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.072 "strip_size_kb": 0, 00:09:38.072 "state": "configuring", 00:09:38.072 "raid_level": "raid1", 00:09:38.072 "superblock": false, 00:09:38.072 "num_base_bdevs": 4, 00:09:38.072 "num_base_bdevs_discovered": 3, 00:09:38.072 "num_base_bdevs_operational": 4, 00:09:38.072 "base_bdevs_list": [ 00:09:38.072 { 00:09:38.072 "name": "BaseBdev1", 00:09:38.072 "uuid": "84079891-0cfc-4916-985c-341dcef163ad", 00:09:38.072 "is_configured": true, 00:09:38.072 "data_offset": 0, 00:09:38.072 "data_size": 65536 00:09:38.072 }, 00:09:38.072 { 00:09:38.072 "name": null, 00:09:38.072 "uuid": "ebd7c165-97a8-4ee1-846b-b8876a27d903", 00:09:38.072 "is_configured": false, 00:09:38.072 "data_offset": 
0, 00:09:38.072 "data_size": 65536 00:09:38.072 }, 00:09:38.072 { 00:09:38.072 "name": "BaseBdev3", 00:09:38.072 "uuid": "f6229e56-88a9-457b-a53b-d1ab101d16d6", 00:09:38.073 "is_configured": true, 00:09:38.073 "data_offset": 0, 00:09:38.073 "data_size": 65536 00:09:38.073 }, 00:09:38.073 { 00:09:38.073 "name": "BaseBdev4", 00:09:38.073 "uuid": "c911f032-ac4c-4cd6-9cd5-0bd6ac6aca86", 00:09:38.073 "is_configured": true, 00:09:38.073 "data_offset": 0, 00:09:38.073 "data_size": 65536 00:09:38.073 } 00:09:38.073 ] 00:09:38.073 }' 00:09:38.073 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.073 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.417 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.417 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.417 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.417 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:38.417 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.417 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:38.417 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:38.417 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.417 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.417 [2024-10-30 09:44:16.836226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:38.417 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.417 09:44:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:38.417 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.417 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.417 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.417 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.417 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.417 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.418 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.418 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.418 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.418 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.418 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.418 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.418 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.418 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.418 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.418 "name": "Existed_Raid", 00:09:38.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.418 "strip_size_kb": 0, 00:09:38.418 "state": "configuring", 00:09:38.418 
"raid_level": "raid1", 00:09:38.418 "superblock": false, 00:09:38.418 "num_base_bdevs": 4, 00:09:38.418 "num_base_bdevs_discovered": 2, 00:09:38.418 "num_base_bdevs_operational": 4, 00:09:38.418 "base_bdevs_list": [ 00:09:38.418 { 00:09:38.418 "name": null, 00:09:38.418 "uuid": "84079891-0cfc-4916-985c-341dcef163ad", 00:09:38.418 "is_configured": false, 00:09:38.418 "data_offset": 0, 00:09:38.418 "data_size": 65536 00:09:38.418 }, 00:09:38.418 { 00:09:38.418 "name": null, 00:09:38.418 "uuid": "ebd7c165-97a8-4ee1-846b-b8876a27d903", 00:09:38.418 "is_configured": false, 00:09:38.418 "data_offset": 0, 00:09:38.418 "data_size": 65536 00:09:38.418 }, 00:09:38.418 { 00:09:38.418 "name": "BaseBdev3", 00:09:38.418 "uuid": "f6229e56-88a9-457b-a53b-d1ab101d16d6", 00:09:38.418 "is_configured": true, 00:09:38.418 "data_offset": 0, 00:09:38.418 "data_size": 65536 00:09:38.418 }, 00:09:38.418 { 00:09:38.418 "name": "BaseBdev4", 00:09:38.418 "uuid": "c911f032-ac4c-4cd6-9cd5-0bd6ac6aca86", 00:09:38.418 "is_configured": true, 00:09:38.418 "data_offset": 0, 00:09:38.418 "data_size": 65536 00:09:38.418 } 00:09:38.418 ] 00:09:38.418 }' 00:09:38.418 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.418 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.675 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:38.675 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.675 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.675 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.675 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.675 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:09:38.675 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:38.675 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.675 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.675 [2024-10-30 09:44:17.225939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.675 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.675 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:38.675 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.675 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.675 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.675 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.675 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.675 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.675 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.675 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.676 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.676 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.676 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:38.676 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.676 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.676 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.676 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.676 "name": "Existed_Raid", 00:09:38.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.676 "strip_size_kb": 0, 00:09:38.676 "state": "configuring", 00:09:38.676 "raid_level": "raid1", 00:09:38.676 "superblock": false, 00:09:38.676 "num_base_bdevs": 4, 00:09:38.676 "num_base_bdevs_discovered": 3, 00:09:38.676 "num_base_bdevs_operational": 4, 00:09:38.676 "base_bdevs_list": [ 00:09:38.676 { 00:09:38.676 "name": null, 00:09:38.676 "uuid": "84079891-0cfc-4916-985c-341dcef163ad", 00:09:38.676 "is_configured": false, 00:09:38.676 "data_offset": 0, 00:09:38.676 "data_size": 65536 00:09:38.676 }, 00:09:38.676 { 00:09:38.676 "name": "BaseBdev2", 00:09:38.676 "uuid": "ebd7c165-97a8-4ee1-846b-b8876a27d903", 00:09:38.676 "is_configured": true, 00:09:38.676 "data_offset": 0, 00:09:38.676 "data_size": 65536 00:09:38.676 }, 00:09:38.676 { 00:09:38.676 "name": "BaseBdev3", 00:09:38.676 "uuid": "f6229e56-88a9-457b-a53b-d1ab101d16d6", 00:09:38.676 "is_configured": true, 00:09:38.676 "data_offset": 0, 00:09:38.676 "data_size": 65536 00:09:38.676 }, 00:09:38.676 { 00:09:38.676 "name": "BaseBdev4", 00:09:38.676 "uuid": "c911f032-ac4c-4cd6-9cd5-0bd6ac6aca86", 00:09:38.676 "is_configured": true, 00:09:38.676 "data_offset": 0, 00:09:38.676 "data_size": 65536 00:09:38.676 } 00:09:38.676 ] 00:09:38.676 }' 00:09:38.676 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.676 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.933 09:44:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.933 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.933 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.933 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:38.933 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.191 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:39.191 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.191 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.191 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.191 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:39.191 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.191 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 84079891-0cfc-4916-985c-341dcef163ad 00:09:39.191 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.191 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.191 [2024-10-30 09:44:17.628039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:39.191 [2024-10-30 09:44:17.628088] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:39.191 [2024-10-30 09:44:17.628096] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:39.191 
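At this point the test has deleted `BaseBdev1` and recreates a malloc disk as `NewBaseBdev` with the original UUID (`84079891-...`), which lets the raid re-claim the vacant slot and transition from `configuring` to `online`. The slot bookkeeping visible in the dumps (a removed slot keeps its UUID but loses its name and its `is_configured` flag; re-adding matches on UUID) can be sketched as follows — note this is inferred from the JSON dumps in this log, not taken from the SPDK source:

```python
def remove_base_bdev(slots, name):
    # A removed slot keeps its UUID; name and is_configured are cleared.
    for s in slots:
        if s["name"] == name:
            s["name"] = None
            s["is_configured"] = False

def add_base_bdev(slots, name, uuid):
    # Re-adding matches the slot by UUID and reconfigures it.
    for s in slots:
        if s["uuid"] == uuid:
            s["name"] = name
            s["is_configured"] = True

def raid_state(slots):
    # The raid goes online only once every slot is configured.
    return "online" if all(s["is_configured"] for s in slots) else "configuring"

slots = [
    {"name": "BaseBdev1", "uuid": "84079891-0cfc-4916-985c-341dcef163ad", "is_configured": True},
    {"name": "BaseBdev2", "uuid": "ebd7c165-97a8-4ee1-846b-b8876a27d903", "is_configured": True},
    {"name": "BaseBdev3", "uuid": "f6229e56-88a9-457b-a53b-d1ab101d16d6", "is_configured": True},
    {"name": "BaseBdev4", "uuid": "c911f032-ac4c-4cd6-9cd5-0bd6ac6aca86", "is_configured": True},
]
remove_base_bdev(slots, "BaseBdev1")
print(raid_state(slots))   # configuring: only 3 of 4 slots configured
add_base_bdev(slots, "NewBaseBdev", "84079891-0cfc-4916-985c-341dcef163ad")
print(raid_state(slots))   # online: all 4 slots configured again
```

This matches the trace: every dump taken while a slot has `"name": null` reports `"state": "configuring"`, and the dump after `NewBaseBdev` is claimed reports `"state": "online"` with all four bdevs discovered.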
[2024-10-30 09:44:17.628299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:39.191 [2024-10-30 09:44:17.628408] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:39.191 [2024-10-30 09:44:17.628421] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:39.191 [2024-10-30 09:44:17.628583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.191 NewBaseBdev 00:09:39.191 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.191 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:39.191 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:09:39.191 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:39.191 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:39.191 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:39.191 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:39.191 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:39.191 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.191 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.191 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.191 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:39.191 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:39.191 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.191 [ 00:09:39.191 { 00:09:39.191 "name": "NewBaseBdev", 00:09:39.191 "aliases": [ 00:09:39.191 "84079891-0cfc-4916-985c-341dcef163ad" 00:09:39.191 ], 00:09:39.191 "product_name": "Malloc disk", 00:09:39.191 "block_size": 512, 00:09:39.191 "num_blocks": 65536, 00:09:39.191 "uuid": "84079891-0cfc-4916-985c-341dcef163ad", 00:09:39.191 "assigned_rate_limits": { 00:09:39.191 "rw_ios_per_sec": 0, 00:09:39.191 "rw_mbytes_per_sec": 0, 00:09:39.191 "r_mbytes_per_sec": 0, 00:09:39.191 "w_mbytes_per_sec": 0 00:09:39.191 }, 00:09:39.191 "claimed": true, 00:09:39.191 "claim_type": "exclusive_write", 00:09:39.191 "zoned": false, 00:09:39.191 "supported_io_types": { 00:09:39.191 "read": true, 00:09:39.191 "write": true, 00:09:39.191 "unmap": true, 00:09:39.191 "flush": true, 00:09:39.191 "reset": true, 00:09:39.191 "nvme_admin": false, 00:09:39.191 "nvme_io": false, 00:09:39.192 "nvme_io_md": false, 00:09:39.192 "write_zeroes": true, 00:09:39.192 "zcopy": true, 00:09:39.192 "get_zone_info": false, 00:09:39.192 "zone_management": false, 00:09:39.192 "zone_append": false, 00:09:39.192 "compare": false, 00:09:39.192 "compare_and_write": false, 00:09:39.192 "abort": true, 00:09:39.192 "seek_hole": false, 00:09:39.192 "seek_data": false, 00:09:39.192 "copy": true, 00:09:39.192 "nvme_iov_md": false 00:09:39.192 }, 00:09:39.192 "memory_domains": [ 00:09:39.192 { 00:09:39.192 "dma_device_id": "system", 00:09:39.192 "dma_device_type": 1 00:09:39.192 }, 00:09:39.192 { 00:09:39.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.192 "dma_device_type": 2 00:09:39.192 } 00:09:39.192 ], 00:09:39.192 "driver_specific": {} 00:09:39.192 } 00:09:39.192 ] 00:09:39.192 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.192 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 
00:09:39.192 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:09:39.192 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.192 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.192 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.192 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.192 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:39.192 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.192 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.192 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.192 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.192 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.192 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.192 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.192 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.192 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.192 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.192 "name": "Existed_Raid", 00:09:39.192 "uuid": "7816f87f-ff29-467b-9f2b-6557239712a7", 00:09:39.192 "strip_size_kb": 0, 00:09:39.192 "state": "online", 00:09:39.192 
"raid_level": "raid1", 00:09:39.192 "superblock": false, 00:09:39.192 "num_base_bdevs": 4, 00:09:39.192 "num_base_bdevs_discovered": 4, 00:09:39.192 "num_base_bdevs_operational": 4, 00:09:39.192 "base_bdevs_list": [ 00:09:39.192 { 00:09:39.192 "name": "NewBaseBdev", 00:09:39.192 "uuid": "84079891-0cfc-4916-985c-341dcef163ad", 00:09:39.192 "is_configured": true, 00:09:39.192 "data_offset": 0, 00:09:39.192 "data_size": 65536 00:09:39.192 }, 00:09:39.192 { 00:09:39.192 "name": "BaseBdev2", 00:09:39.192 "uuid": "ebd7c165-97a8-4ee1-846b-b8876a27d903", 00:09:39.192 "is_configured": true, 00:09:39.192 "data_offset": 0, 00:09:39.192 "data_size": 65536 00:09:39.192 }, 00:09:39.192 { 00:09:39.192 "name": "BaseBdev3", 00:09:39.192 "uuid": "f6229e56-88a9-457b-a53b-d1ab101d16d6", 00:09:39.192 "is_configured": true, 00:09:39.192 "data_offset": 0, 00:09:39.192 "data_size": 65536 00:09:39.192 }, 00:09:39.192 { 00:09:39.192 "name": "BaseBdev4", 00:09:39.192 "uuid": "c911f032-ac4c-4cd6-9cd5-0bd6ac6aca86", 00:09:39.192 "is_configured": true, 00:09:39.192 "data_offset": 0, 00:09:39.192 "data_size": 65536 00:09:39.192 } 00:09:39.192 ] 00:09:39.192 }' 00:09:39.192 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.192 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.450 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:39.450 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:39.450 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:39.450 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:39.450 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:39.450 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:09:39.450 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:39.450 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:39.450 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.450 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.450 [2024-10-30 09:44:17.980463] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.450 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.450 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:39.450 "name": "Existed_Raid", 00:09:39.450 "aliases": [ 00:09:39.450 "7816f87f-ff29-467b-9f2b-6557239712a7" 00:09:39.450 ], 00:09:39.450 "product_name": "Raid Volume", 00:09:39.450 "block_size": 512, 00:09:39.450 "num_blocks": 65536, 00:09:39.450 "uuid": "7816f87f-ff29-467b-9f2b-6557239712a7", 00:09:39.450 "assigned_rate_limits": { 00:09:39.450 "rw_ios_per_sec": 0, 00:09:39.450 "rw_mbytes_per_sec": 0, 00:09:39.450 "r_mbytes_per_sec": 0, 00:09:39.450 "w_mbytes_per_sec": 0 00:09:39.450 }, 00:09:39.450 "claimed": false, 00:09:39.450 "zoned": false, 00:09:39.450 "supported_io_types": { 00:09:39.450 "read": true, 00:09:39.450 "write": true, 00:09:39.450 "unmap": false, 00:09:39.450 "flush": false, 00:09:39.450 "reset": true, 00:09:39.450 "nvme_admin": false, 00:09:39.450 "nvme_io": false, 00:09:39.450 "nvme_io_md": false, 00:09:39.450 "write_zeroes": true, 00:09:39.450 "zcopy": false, 00:09:39.450 "get_zone_info": false, 00:09:39.450 "zone_management": false, 00:09:39.450 "zone_append": false, 00:09:39.450 "compare": false, 00:09:39.450 "compare_and_write": false, 00:09:39.450 "abort": false, 00:09:39.450 "seek_hole": false, 00:09:39.450 "seek_data": false, 00:09:39.450 
"copy": false, 00:09:39.450 "nvme_iov_md": false 00:09:39.450 }, 00:09:39.450 "memory_domains": [ 00:09:39.450 { 00:09:39.450 "dma_device_id": "system", 00:09:39.450 "dma_device_type": 1 00:09:39.450 }, 00:09:39.450 { 00:09:39.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.450 "dma_device_type": 2 00:09:39.450 }, 00:09:39.450 { 00:09:39.450 "dma_device_id": "system", 00:09:39.450 "dma_device_type": 1 00:09:39.450 }, 00:09:39.450 { 00:09:39.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.450 "dma_device_type": 2 00:09:39.450 }, 00:09:39.450 { 00:09:39.450 "dma_device_id": "system", 00:09:39.450 "dma_device_type": 1 00:09:39.450 }, 00:09:39.450 { 00:09:39.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.450 "dma_device_type": 2 00:09:39.450 }, 00:09:39.450 { 00:09:39.450 "dma_device_id": "system", 00:09:39.450 "dma_device_type": 1 00:09:39.450 }, 00:09:39.450 { 00:09:39.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.450 "dma_device_type": 2 00:09:39.450 } 00:09:39.450 ], 00:09:39.450 "driver_specific": { 00:09:39.450 "raid": { 00:09:39.450 "uuid": "7816f87f-ff29-467b-9f2b-6557239712a7", 00:09:39.450 "strip_size_kb": 0, 00:09:39.450 "state": "online", 00:09:39.450 "raid_level": "raid1", 00:09:39.450 "superblock": false, 00:09:39.450 "num_base_bdevs": 4, 00:09:39.450 "num_base_bdevs_discovered": 4, 00:09:39.450 "num_base_bdevs_operational": 4, 00:09:39.450 "base_bdevs_list": [ 00:09:39.450 { 00:09:39.450 "name": "NewBaseBdev", 00:09:39.450 "uuid": "84079891-0cfc-4916-985c-341dcef163ad", 00:09:39.450 "is_configured": true, 00:09:39.450 "data_offset": 0, 00:09:39.450 "data_size": 65536 00:09:39.450 }, 00:09:39.450 { 00:09:39.450 "name": "BaseBdev2", 00:09:39.450 "uuid": "ebd7c165-97a8-4ee1-846b-b8876a27d903", 00:09:39.450 "is_configured": true, 00:09:39.450 "data_offset": 0, 00:09:39.450 "data_size": 65536 00:09:39.450 }, 00:09:39.450 { 00:09:39.450 "name": "BaseBdev3", 00:09:39.450 "uuid": "f6229e56-88a9-457b-a53b-d1ab101d16d6", 00:09:39.450 
"is_configured": true, 00:09:39.450 "data_offset": 0, 00:09:39.450 "data_size": 65536 00:09:39.450 }, 00:09:39.450 { 00:09:39.450 "name": "BaseBdev4", 00:09:39.450 "uuid": "c911f032-ac4c-4cd6-9cd5-0bd6ac6aca86", 00:09:39.450 "is_configured": true, 00:09:39.450 "data_offset": 0, 00:09:39.450 "data_size": 65536 00:09:39.450 } 00:09:39.450 ] 00:09:39.450 } 00:09:39.450 } 00:09:39.450 }' 00:09:39.450 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:39.450 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:39.450 BaseBdev2 00:09:39.450 BaseBdev3 00:09:39.450 BaseBdev4' 00:09:39.450 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.450 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:39.450 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.707 09:44:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.707 09:44:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.707 [2024-10-30 09:44:18.208189] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:39.707 [2024-10-30 09:44:18.208214] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:39.707 [2024-10-30 09:44:18.208271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:39.707 [2024-10-30 09:44:18.208502] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:39.707 [2024-10-30 09:44:18.208519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 71349 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 71349 ']' 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 71349 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71349 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:39.707 killing process with pid 71349 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71349' 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 71349 00:09:39.707 [2024-10-30 09:44:18.235988] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:39.707 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 71349 00:09:39.986 [2024-10-30 09:44:18.424307] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:40.551 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:40.551 00:09:40.551 real 0m7.936s 00:09:40.551 user 0m12.812s 00:09:40.551 sys 0m1.275s 00:09:40.551 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:40.551 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.551 ************************************ 00:09:40.551 END TEST raid_state_function_test 00:09:40.551 ************************************ 
00:09:40.551 09:44:19 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:09:40.551 09:44:19 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:40.551 09:44:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:40.551 09:44:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:40.551 ************************************ 00:09:40.551 START TEST raid_state_function_test_sb 00:09:40.551 ************************************ 00:09:40.551 09:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 true 00:09:40.551 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:40.551 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:40.551 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:40.551 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:40.551 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:40.551 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:40.551 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:40.551 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:40.551 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:40.551 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:40.551 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:40.551 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:40.551 
09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:40.551 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:40.551 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:40.551 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:40.551 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:40.551 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:40.552 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:40.552 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:40.552 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:40.552 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:40.552 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:40.552 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:40.552 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:40.552 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:40.552 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:40.552 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:40.552 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71982 00:09:40.552 Process raid pid: 71982 00:09:40.552 09:44:19 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71982' 00:09:40.552 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71982 00:09:40.552 09:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 71982 ']' 00:09:40.552 09:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.552 09:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:40.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.552 09:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.552 09:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:40.552 09:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.552 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:40.552 [2024-10-30 09:44:19.091780] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:09:40.552 [2024-10-30 09:44:19.091898] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.810 [2024-10-30 09:44:19.246975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.810 [2024-10-30 09:44:19.329432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.068 [2024-10-30 09:44:19.439221] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.068 [2024-10-30 09:44:19.439249] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.389 09:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:41.389 09:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:09:41.389 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:41.389 09:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.389 09:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.690 [2024-10-30 09:44:19.957285] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:41.690 [2024-10-30 09:44:19.957328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:41.690 [2024-10-30 09:44:19.957337] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:41.690 [2024-10-30 09:44:19.957344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:41.690 [2024-10-30 09:44:19.957353] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:09:41.690 [2024-10-30 09:44:19.957361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:41.690 [2024-10-30 09:44:19.957366] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:41.690 [2024-10-30 09:44:19.957373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:41.690 09:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.690 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:41.690 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.690 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.690 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.690 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.690 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.690 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.690 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.690 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.690 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.690 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.690 09:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.690 09:44:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.691 09:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.691 09:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.691 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.691 "name": "Existed_Raid", 00:09:41.691 "uuid": "102e7bda-3a22-45dd-9148-b779c6f24082", 00:09:41.691 "strip_size_kb": 0, 00:09:41.691 "state": "configuring", 00:09:41.691 "raid_level": "raid1", 00:09:41.691 "superblock": true, 00:09:41.691 "num_base_bdevs": 4, 00:09:41.691 "num_base_bdevs_discovered": 0, 00:09:41.691 "num_base_bdevs_operational": 4, 00:09:41.691 "base_bdevs_list": [ 00:09:41.691 { 00:09:41.691 "name": "BaseBdev1", 00:09:41.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.691 "is_configured": false, 00:09:41.691 "data_offset": 0, 00:09:41.691 "data_size": 0 00:09:41.691 }, 00:09:41.691 { 00:09:41.691 "name": "BaseBdev2", 00:09:41.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.691 "is_configured": false, 00:09:41.691 "data_offset": 0, 00:09:41.691 "data_size": 0 00:09:41.691 }, 00:09:41.691 { 00:09:41.691 "name": "BaseBdev3", 00:09:41.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.691 "is_configured": false, 00:09:41.691 "data_offset": 0, 00:09:41.691 "data_size": 0 00:09:41.691 }, 00:09:41.691 { 00:09:41.691 "name": "BaseBdev4", 00:09:41.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.691 "is_configured": false, 00:09:41.691 "data_offset": 0, 00:09:41.691 "data_size": 0 00:09:41.691 } 00:09:41.691 ] 00:09:41.691 }' 00:09:41.691 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.691 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.691 09:44:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:41.691 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.691 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.691 [2024-10-30 09:44:20.285297] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:41.691 [2024-10-30 09:44:20.285331] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:41.691 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.691 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:41.691 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.691 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.691 [2024-10-30 09:44:20.293302] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:41.691 [2024-10-30 09:44:20.293334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:41.691 [2024-10-30 09:44:20.293341] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:41.691 [2024-10-30 09:44:20.293349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:41.691 [2024-10-30 09:44:20.293354] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:41.691 [2024-10-30 09:44:20.293361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:41.691 [2024-10-30 09:44:20.293366] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:09:41.691 [2024-10-30 09:44:20.293373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:41.691 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.691 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:41.691 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.691 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.948 [2024-10-30 09:44:20.320926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:41.948 BaseBdev1 00:09:41.948 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.948 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:41.948 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:41.948 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:41.948 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:41.948 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:41.948 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:41.948 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:41.948 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.948 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.948 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:41.948 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:41.948 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.948 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.948 [ 00:09:41.948 { 00:09:41.948 "name": "BaseBdev1", 00:09:41.948 "aliases": [ 00:09:41.948 "77c0b0d4-9462-4d84-a668-e9b1d6d0c6a8" 00:09:41.948 ], 00:09:41.948 "product_name": "Malloc disk", 00:09:41.948 "block_size": 512, 00:09:41.948 "num_blocks": 65536, 00:09:41.948 "uuid": "77c0b0d4-9462-4d84-a668-e9b1d6d0c6a8", 00:09:41.948 "assigned_rate_limits": { 00:09:41.948 "rw_ios_per_sec": 0, 00:09:41.948 "rw_mbytes_per_sec": 0, 00:09:41.948 "r_mbytes_per_sec": 0, 00:09:41.948 "w_mbytes_per_sec": 0 00:09:41.948 }, 00:09:41.948 "claimed": true, 00:09:41.948 "claim_type": "exclusive_write", 00:09:41.948 "zoned": false, 00:09:41.948 "supported_io_types": { 00:09:41.948 "read": true, 00:09:41.948 "write": true, 00:09:41.948 "unmap": true, 00:09:41.948 "flush": true, 00:09:41.948 "reset": true, 00:09:41.948 "nvme_admin": false, 00:09:41.948 "nvme_io": false, 00:09:41.948 "nvme_io_md": false, 00:09:41.948 "write_zeroes": true, 00:09:41.948 "zcopy": true, 00:09:41.948 "get_zone_info": false, 00:09:41.948 "zone_management": false, 00:09:41.948 "zone_append": false, 00:09:41.948 "compare": false, 00:09:41.948 "compare_and_write": false, 00:09:41.948 "abort": true, 00:09:41.948 "seek_hole": false, 00:09:41.948 "seek_data": false, 00:09:41.948 "copy": true, 00:09:41.948 "nvme_iov_md": false 00:09:41.948 }, 00:09:41.948 "memory_domains": [ 00:09:41.948 { 00:09:41.948 "dma_device_id": "system", 00:09:41.948 "dma_device_type": 1 00:09:41.948 }, 00:09:41.948 { 00:09:41.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.948 "dma_device_type": 2 00:09:41.948 } 00:09:41.948 ], 00:09:41.948 "driver_specific": {} 
00:09:41.948 } 00:09:41.948 ] 00:09:41.948 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.948 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:41.948 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:41.948 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.948 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.948 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.949 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.949 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.949 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.949 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.949 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.949 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.949 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.949 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.949 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.949 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.949 09:44:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.949 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.949 "name": "Existed_Raid", 00:09:41.949 "uuid": "a4781533-873d-4de3-8334-eecaed1c48fe", 00:09:41.949 "strip_size_kb": 0, 00:09:41.949 "state": "configuring", 00:09:41.949 "raid_level": "raid1", 00:09:41.949 "superblock": true, 00:09:41.949 "num_base_bdevs": 4, 00:09:41.949 "num_base_bdevs_discovered": 1, 00:09:41.949 "num_base_bdevs_operational": 4, 00:09:41.949 "base_bdevs_list": [ 00:09:41.949 { 00:09:41.949 "name": "BaseBdev1", 00:09:41.949 "uuid": "77c0b0d4-9462-4d84-a668-e9b1d6d0c6a8", 00:09:41.949 "is_configured": true, 00:09:41.949 "data_offset": 2048, 00:09:41.949 "data_size": 63488 00:09:41.949 }, 00:09:41.949 { 00:09:41.949 "name": "BaseBdev2", 00:09:41.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.949 "is_configured": false, 00:09:41.949 "data_offset": 0, 00:09:41.949 "data_size": 0 00:09:41.949 }, 00:09:41.949 { 00:09:41.949 "name": "BaseBdev3", 00:09:41.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.949 "is_configured": false, 00:09:41.949 "data_offset": 0, 00:09:41.949 "data_size": 0 00:09:41.949 }, 00:09:41.949 { 00:09:41.949 "name": "BaseBdev4", 00:09:41.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.949 "is_configured": false, 00:09:41.949 "data_offset": 0, 00:09:41.949 "data_size": 0 00:09:41.949 } 00:09:41.949 ] 00:09:41.949 }' 00:09:41.949 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.949 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:42.205 [2024-10-30 09:44:20.665018] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:42.205 [2024-10-30 09:44:20.665070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.205 [2024-10-30 09:44:20.673077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.205 [2024-10-30 09:44:20.674598] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:42.205 [2024-10-30 09:44:20.674634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:42.205 [2024-10-30 09:44:20.674642] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:42.205 [2024-10-30 09:44:20.674651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:42.205 [2024-10-30 09:44:20.674657] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:42.205 [2024-10-30 09:44:20.674664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:42.205 09:44:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.205 "name": 
"Existed_Raid", 00:09:42.205 "uuid": "3df87310-3ed4-4ad0-ac97-307bdcfdd900", 00:09:42.205 "strip_size_kb": 0, 00:09:42.205 "state": "configuring", 00:09:42.205 "raid_level": "raid1", 00:09:42.205 "superblock": true, 00:09:42.205 "num_base_bdevs": 4, 00:09:42.205 "num_base_bdevs_discovered": 1, 00:09:42.205 "num_base_bdevs_operational": 4, 00:09:42.205 "base_bdevs_list": [ 00:09:42.205 { 00:09:42.205 "name": "BaseBdev1", 00:09:42.205 "uuid": "77c0b0d4-9462-4d84-a668-e9b1d6d0c6a8", 00:09:42.205 "is_configured": true, 00:09:42.205 "data_offset": 2048, 00:09:42.205 "data_size": 63488 00:09:42.205 }, 00:09:42.205 { 00:09:42.205 "name": "BaseBdev2", 00:09:42.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.205 "is_configured": false, 00:09:42.205 "data_offset": 0, 00:09:42.205 "data_size": 0 00:09:42.205 }, 00:09:42.205 { 00:09:42.205 "name": "BaseBdev3", 00:09:42.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.205 "is_configured": false, 00:09:42.205 "data_offset": 0, 00:09:42.205 "data_size": 0 00:09:42.205 }, 00:09:42.205 { 00:09:42.205 "name": "BaseBdev4", 00:09:42.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.205 "is_configured": false, 00:09:42.205 "data_offset": 0, 00:09:42.205 "data_size": 0 00:09:42.205 } 00:09:42.205 ] 00:09:42.205 }' 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.205 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.462 09:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:42.462 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.462 09:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.462 [2024-10-30 09:44:21.003397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:42.462 
BaseBdev2 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.462 [ 00:09:42.462 { 00:09:42.462 "name": "BaseBdev2", 00:09:42.462 "aliases": [ 00:09:42.462 "044f0e23-5d19-4ea9-93ae-1ad89f007136" 00:09:42.462 ], 00:09:42.462 "product_name": "Malloc disk", 00:09:42.462 "block_size": 512, 00:09:42.462 "num_blocks": 65536, 00:09:42.462 "uuid": "044f0e23-5d19-4ea9-93ae-1ad89f007136", 00:09:42.462 "assigned_rate_limits": { 
00:09:42.462 "rw_ios_per_sec": 0, 00:09:42.462 "rw_mbytes_per_sec": 0, 00:09:42.462 "r_mbytes_per_sec": 0, 00:09:42.462 "w_mbytes_per_sec": 0 00:09:42.462 }, 00:09:42.462 "claimed": true, 00:09:42.462 "claim_type": "exclusive_write", 00:09:42.462 "zoned": false, 00:09:42.462 "supported_io_types": { 00:09:42.462 "read": true, 00:09:42.462 "write": true, 00:09:42.462 "unmap": true, 00:09:42.462 "flush": true, 00:09:42.462 "reset": true, 00:09:42.462 "nvme_admin": false, 00:09:42.462 "nvme_io": false, 00:09:42.462 "nvme_io_md": false, 00:09:42.462 "write_zeroes": true, 00:09:42.462 "zcopy": true, 00:09:42.462 "get_zone_info": false, 00:09:42.462 "zone_management": false, 00:09:42.462 "zone_append": false, 00:09:42.462 "compare": false, 00:09:42.462 "compare_and_write": false, 00:09:42.462 "abort": true, 00:09:42.462 "seek_hole": false, 00:09:42.462 "seek_data": false, 00:09:42.462 "copy": true, 00:09:42.462 "nvme_iov_md": false 00:09:42.462 }, 00:09:42.462 "memory_domains": [ 00:09:42.462 { 00:09:42.462 "dma_device_id": "system", 00:09:42.462 "dma_device_type": 1 00:09:42.462 }, 00:09:42.462 { 00:09:42.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.462 "dma_device_type": 2 00:09:42.462 } 00:09:42.462 ], 00:09:42.462 "driver_specific": {} 00:09:42.462 } 00:09:42.462 ] 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.462 "name": "Existed_Raid", 00:09:42.462 "uuid": "3df87310-3ed4-4ad0-ac97-307bdcfdd900", 00:09:42.462 "strip_size_kb": 0, 00:09:42.462 "state": "configuring", 00:09:42.462 "raid_level": "raid1", 00:09:42.462 "superblock": true, 00:09:42.462 "num_base_bdevs": 4, 00:09:42.462 "num_base_bdevs_discovered": 2, 00:09:42.462 "num_base_bdevs_operational": 4, 00:09:42.462 
"base_bdevs_list": [ 00:09:42.462 { 00:09:42.462 "name": "BaseBdev1", 00:09:42.462 "uuid": "77c0b0d4-9462-4d84-a668-e9b1d6d0c6a8", 00:09:42.462 "is_configured": true, 00:09:42.462 "data_offset": 2048, 00:09:42.462 "data_size": 63488 00:09:42.462 }, 00:09:42.462 { 00:09:42.462 "name": "BaseBdev2", 00:09:42.462 "uuid": "044f0e23-5d19-4ea9-93ae-1ad89f007136", 00:09:42.462 "is_configured": true, 00:09:42.462 "data_offset": 2048, 00:09:42.462 "data_size": 63488 00:09:42.462 }, 00:09:42.462 { 00:09:42.462 "name": "BaseBdev3", 00:09:42.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.462 "is_configured": false, 00:09:42.462 "data_offset": 0, 00:09:42.462 "data_size": 0 00:09:42.462 }, 00:09:42.462 { 00:09:42.462 "name": "BaseBdev4", 00:09:42.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.462 "is_configured": false, 00:09:42.462 "data_offset": 0, 00:09:42.462 "data_size": 0 00:09:42.462 } 00:09:42.462 ] 00:09:42.462 }' 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.462 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.029 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:43.029 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.029 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.029 [2024-10-30 09:44:21.390003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:43.029 BaseBdev3 00:09:43.029 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.029 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:43.029 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev3 00:09:43.029 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:43.029 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:43.029 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:43.029 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:43.029 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:43.029 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.029 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.029 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.029 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:43.029 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.029 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.029 [ 00:09:43.029 { 00:09:43.029 "name": "BaseBdev3", 00:09:43.029 "aliases": [ 00:09:43.029 "d2f22bd0-7d3c-431a-b9fd-d231becf95b5" 00:09:43.029 ], 00:09:43.029 "product_name": "Malloc disk", 00:09:43.029 "block_size": 512, 00:09:43.029 "num_blocks": 65536, 00:09:43.029 "uuid": "d2f22bd0-7d3c-431a-b9fd-d231becf95b5", 00:09:43.029 "assigned_rate_limits": { 00:09:43.029 "rw_ios_per_sec": 0, 00:09:43.029 "rw_mbytes_per_sec": 0, 00:09:43.029 "r_mbytes_per_sec": 0, 00:09:43.029 "w_mbytes_per_sec": 0 00:09:43.029 }, 00:09:43.029 "claimed": true, 00:09:43.029 "claim_type": "exclusive_write", 00:09:43.029 "zoned": false, 00:09:43.029 "supported_io_types": { 00:09:43.029 "read": true, 00:09:43.029 
"write": true, 00:09:43.029 "unmap": true, 00:09:43.029 "flush": true, 00:09:43.029 "reset": true, 00:09:43.029 "nvme_admin": false, 00:09:43.029 "nvme_io": false, 00:09:43.029 "nvme_io_md": false, 00:09:43.029 "write_zeroes": true, 00:09:43.029 "zcopy": true, 00:09:43.029 "get_zone_info": false, 00:09:43.029 "zone_management": false, 00:09:43.029 "zone_append": false, 00:09:43.029 "compare": false, 00:09:43.030 "compare_and_write": false, 00:09:43.030 "abort": true, 00:09:43.030 "seek_hole": false, 00:09:43.030 "seek_data": false, 00:09:43.030 "copy": true, 00:09:43.030 "nvme_iov_md": false 00:09:43.030 }, 00:09:43.030 "memory_domains": [ 00:09:43.030 { 00:09:43.030 "dma_device_id": "system", 00:09:43.030 "dma_device_type": 1 00:09:43.030 }, 00:09:43.030 { 00:09:43.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.030 "dma_device_type": 2 00:09:43.030 } 00:09:43.030 ], 00:09:43.030 "driver_specific": {} 00:09:43.030 } 00:09:43.030 ] 00:09:43.030 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.030 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:43.030 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:43.030 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:43.030 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:43.030 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.030 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.030 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.030 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:43.030 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:43.030 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.030 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.030 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.030 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.030 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.030 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.030 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.030 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.030 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.030 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.030 "name": "Existed_Raid", 00:09:43.030 "uuid": "3df87310-3ed4-4ad0-ac97-307bdcfdd900", 00:09:43.030 "strip_size_kb": 0, 00:09:43.030 "state": "configuring", 00:09:43.030 "raid_level": "raid1", 00:09:43.030 "superblock": true, 00:09:43.030 "num_base_bdevs": 4, 00:09:43.030 "num_base_bdevs_discovered": 3, 00:09:43.030 "num_base_bdevs_operational": 4, 00:09:43.030 "base_bdevs_list": [ 00:09:43.030 { 00:09:43.030 "name": "BaseBdev1", 00:09:43.030 "uuid": "77c0b0d4-9462-4d84-a668-e9b1d6d0c6a8", 00:09:43.030 "is_configured": true, 00:09:43.030 "data_offset": 2048, 00:09:43.030 "data_size": 63488 00:09:43.030 }, 00:09:43.030 { 00:09:43.030 "name": "BaseBdev2", 00:09:43.030 "uuid": 
"044f0e23-5d19-4ea9-93ae-1ad89f007136", 00:09:43.030 "is_configured": true, 00:09:43.030 "data_offset": 2048, 00:09:43.030 "data_size": 63488 00:09:43.030 }, 00:09:43.030 { 00:09:43.030 "name": "BaseBdev3", 00:09:43.030 "uuid": "d2f22bd0-7d3c-431a-b9fd-d231becf95b5", 00:09:43.030 "is_configured": true, 00:09:43.030 "data_offset": 2048, 00:09:43.030 "data_size": 63488 00:09:43.030 }, 00:09:43.030 { 00:09:43.030 "name": "BaseBdev4", 00:09:43.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.030 "is_configured": false, 00:09:43.030 "data_offset": 0, 00:09:43.030 "data_size": 0 00:09:43.030 } 00:09:43.030 ] 00:09:43.030 }' 00:09:43.030 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.030 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.288 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:43.288 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.288 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.288 [2024-10-30 09:44:21.748044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:43.288 [2024-10-30 09:44:21.748246] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:43.288 [2024-10-30 09:44:21.748258] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:43.288 BaseBdev4 00:09:43.288 [2024-10-30 09:44:21.748471] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:43.288 [2024-10-30 09:44:21.748588] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:43.288 [2024-10-30 09:44:21.748599] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:09:43.288 [2024-10-30 09:44:21.748703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.288 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.288 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:43.288 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:09:43.288 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:43.288 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:43.288 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:43.288 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:43.288 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:43.288 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.288 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.288 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.288 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:43.288 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.288 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.288 [ 00:09:43.288 { 00:09:43.288 "name": "BaseBdev4", 00:09:43.288 "aliases": [ 00:09:43.288 "c923c460-941e-4cc2-a483-0d13e65b623e" 00:09:43.288 ], 00:09:43.288 "product_name": "Malloc disk", 00:09:43.288 "block_size": 512, 00:09:43.288 
"num_blocks": 65536, 00:09:43.288 "uuid": "c923c460-941e-4cc2-a483-0d13e65b623e", 00:09:43.288 "assigned_rate_limits": { 00:09:43.288 "rw_ios_per_sec": 0, 00:09:43.288 "rw_mbytes_per_sec": 0, 00:09:43.288 "r_mbytes_per_sec": 0, 00:09:43.288 "w_mbytes_per_sec": 0 00:09:43.288 }, 00:09:43.288 "claimed": true, 00:09:43.288 "claim_type": "exclusive_write", 00:09:43.288 "zoned": false, 00:09:43.288 "supported_io_types": { 00:09:43.288 "read": true, 00:09:43.288 "write": true, 00:09:43.289 "unmap": true, 00:09:43.289 "flush": true, 00:09:43.289 "reset": true, 00:09:43.289 "nvme_admin": false, 00:09:43.289 "nvme_io": false, 00:09:43.289 "nvme_io_md": false, 00:09:43.289 "write_zeroes": true, 00:09:43.289 "zcopy": true, 00:09:43.289 "get_zone_info": false, 00:09:43.289 "zone_management": false, 00:09:43.289 "zone_append": false, 00:09:43.289 "compare": false, 00:09:43.289 "compare_and_write": false, 00:09:43.289 "abort": true, 00:09:43.289 "seek_hole": false, 00:09:43.289 "seek_data": false, 00:09:43.289 "copy": true, 00:09:43.289 "nvme_iov_md": false 00:09:43.289 }, 00:09:43.289 "memory_domains": [ 00:09:43.289 { 00:09:43.289 "dma_device_id": "system", 00:09:43.289 "dma_device_type": 1 00:09:43.289 }, 00:09:43.289 { 00:09:43.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.289 "dma_device_type": 2 00:09:43.289 } 00:09:43.289 ], 00:09:43.289 "driver_specific": {} 00:09:43.289 } 00:09:43.289 ] 00:09:43.289 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.289 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:43.289 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:43.289 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:43.289 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:09:43.289 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.289 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.289 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.289 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.289 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:43.289 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.289 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.289 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.289 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.289 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.289 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.289 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.289 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.289 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.289 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.289 "name": "Existed_Raid", 00:09:43.289 "uuid": "3df87310-3ed4-4ad0-ac97-307bdcfdd900", 00:09:43.289 "strip_size_kb": 0, 00:09:43.289 "state": "online", 00:09:43.289 "raid_level": "raid1", 00:09:43.289 "superblock": true, 00:09:43.289 "num_base_bdevs": 4, 
00:09:43.289 "num_base_bdevs_discovered": 4, 00:09:43.289 "num_base_bdevs_operational": 4, 00:09:43.289 "base_bdevs_list": [ 00:09:43.289 { 00:09:43.289 "name": "BaseBdev1", 00:09:43.289 "uuid": "77c0b0d4-9462-4d84-a668-e9b1d6d0c6a8", 00:09:43.289 "is_configured": true, 00:09:43.289 "data_offset": 2048, 00:09:43.289 "data_size": 63488 00:09:43.289 }, 00:09:43.289 { 00:09:43.289 "name": "BaseBdev2", 00:09:43.289 "uuid": "044f0e23-5d19-4ea9-93ae-1ad89f007136", 00:09:43.289 "is_configured": true, 00:09:43.289 "data_offset": 2048, 00:09:43.289 "data_size": 63488 00:09:43.289 }, 00:09:43.289 { 00:09:43.289 "name": "BaseBdev3", 00:09:43.289 "uuid": "d2f22bd0-7d3c-431a-b9fd-d231becf95b5", 00:09:43.289 "is_configured": true, 00:09:43.289 "data_offset": 2048, 00:09:43.289 "data_size": 63488 00:09:43.289 }, 00:09:43.289 { 00:09:43.289 "name": "BaseBdev4", 00:09:43.289 "uuid": "c923c460-941e-4cc2-a483-0d13e65b623e", 00:09:43.289 "is_configured": true, 00:09:43.289 "data_offset": 2048, 00:09:43.289 "data_size": 63488 00:09:43.289 } 00:09:43.289 ] 00:09:43.289 }' 00:09:43.289 09:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.289 09:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.548 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:43.548 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:43.548 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:43.548 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:43.548 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:43.548 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:43.548 
09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:43.548 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:43.548 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.548 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.548 [2024-10-30 09:44:22.096451] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:43.548 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.548 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:43.548 "name": "Existed_Raid", 00:09:43.548 "aliases": [ 00:09:43.548 "3df87310-3ed4-4ad0-ac97-307bdcfdd900" 00:09:43.548 ], 00:09:43.548 "product_name": "Raid Volume", 00:09:43.548 "block_size": 512, 00:09:43.548 "num_blocks": 63488, 00:09:43.548 "uuid": "3df87310-3ed4-4ad0-ac97-307bdcfdd900", 00:09:43.548 "assigned_rate_limits": { 00:09:43.548 "rw_ios_per_sec": 0, 00:09:43.548 "rw_mbytes_per_sec": 0, 00:09:43.548 "r_mbytes_per_sec": 0, 00:09:43.548 "w_mbytes_per_sec": 0 00:09:43.548 }, 00:09:43.548 "claimed": false, 00:09:43.548 "zoned": false, 00:09:43.548 "supported_io_types": { 00:09:43.548 "read": true, 00:09:43.548 "write": true, 00:09:43.548 "unmap": false, 00:09:43.548 "flush": false, 00:09:43.548 "reset": true, 00:09:43.548 "nvme_admin": false, 00:09:43.548 "nvme_io": false, 00:09:43.548 "nvme_io_md": false, 00:09:43.548 "write_zeroes": true, 00:09:43.548 "zcopy": false, 00:09:43.548 "get_zone_info": false, 00:09:43.548 "zone_management": false, 00:09:43.548 "zone_append": false, 00:09:43.548 "compare": false, 00:09:43.548 "compare_and_write": false, 00:09:43.548 "abort": false, 00:09:43.548 "seek_hole": false, 00:09:43.548 "seek_data": false, 00:09:43.548 "copy": false, 00:09:43.548 
"nvme_iov_md": false 00:09:43.548 }, 00:09:43.548 "memory_domains": [ 00:09:43.548 { 00:09:43.548 "dma_device_id": "system", 00:09:43.548 "dma_device_type": 1 00:09:43.548 }, 00:09:43.548 { 00:09:43.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.548 "dma_device_type": 2 00:09:43.548 }, 00:09:43.548 { 00:09:43.548 "dma_device_id": "system", 00:09:43.548 "dma_device_type": 1 00:09:43.548 }, 00:09:43.548 { 00:09:43.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.548 "dma_device_type": 2 00:09:43.548 }, 00:09:43.548 { 00:09:43.548 "dma_device_id": "system", 00:09:43.548 "dma_device_type": 1 00:09:43.548 }, 00:09:43.548 { 00:09:43.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.548 "dma_device_type": 2 00:09:43.548 }, 00:09:43.548 { 00:09:43.548 "dma_device_id": "system", 00:09:43.548 "dma_device_type": 1 00:09:43.548 }, 00:09:43.548 { 00:09:43.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.548 "dma_device_type": 2 00:09:43.548 } 00:09:43.548 ], 00:09:43.548 "driver_specific": { 00:09:43.548 "raid": { 00:09:43.548 "uuid": "3df87310-3ed4-4ad0-ac97-307bdcfdd900", 00:09:43.548 "strip_size_kb": 0, 00:09:43.548 "state": "online", 00:09:43.548 "raid_level": "raid1", 00:09:43.548 "superblock": true, 00:09:43.548 "num_base_bdevs": 4, 00:09:43.548 "num_base_bdevs_discovered": 4, 00:09:43.548 "num_base_bdevs_operational": 4, 00:09:43.548 "base_bdevs_list": [ 00:09:43.548 { 00:09:43.548 "name": "BaseBdev1", 00:09:43.548 "uuid": "77c0b0d4-9462-4d84-a668-e9b1d6d0c6a8", 00:09:43.548 "is_configured": true, 00:09:43.548 "data_offset": 2048, 00:09:43.548 "data_size": 63488 00:09:43.548 }, 00:09:43.548 { 00:09:43.548 "name": "BaseBdev2", 00:09:43.548 "uuid": "044f0e23-5d19-4ea9-93ae-1ad89f007136", 00:09:43.548 "is_configured": true, 00:09:43.548 "data_offset": 2048, 00:09:43.548 "data_size": 63488 00:09:43.548 }, 00:09:43.548 { 00:09:43.548 "name": "BaseBdev3", 00:09:43.548 "uuid": "d2f22bd0-7d3c-431a-b9fd-d231becf95b5", 00:09:43.548 "is_configured": true, 
00:09:43.548 "data_offset": 2048, 00:09:43.548 "data_size": 63488 00:09:43.548 }, 00:09:43.548 { 00:09:43.548 "name": "BaseBdev4", 00:09:43.548 "uuid": "c923c460-941e-4cc2-a483-0d13e65b623e", 00:09:43.548 "is_configured": true, 00:09:43.548 "data_offset": 2048, 00:09:43.548 "data_size": 63488 00:09:43.548 } 00:09:43.548 ] 00:09:43.548 } 00:09:43.548 } 00:09:43.548 }' 00:09:43.548 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:43.548 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:43.548 BaseBdev2 00:09:43.548 BaseBdev3 00:09:43.548 BaseBdev4' 00:09:43.548 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.806 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:43.806 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.806 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:43.806 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.806 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.807 09:44:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.807 [2024-10-30 09:44:22.316249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:43.807 09:44:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.807 "name": "Existed_Raid", 00:09:43.807 "uuid": "3df87310-3ed4-4ad0-ac97-307bdcfdd900", 00:09:43.807 "strip_size_kb": 0, 00:09:43.807 
"state": "online", 00:09:43.807 "raid_level": "raid1", 00:09:43.807 "superblock": true, 00:09:43.807 "num_base_bdevs": 4, 00:09:43.807 "num_base_bdevs_discovered": 3, 00:09:43.807 "num_base_bdevs_operational": 3, 00:09:43.807 "base_bdevs_list": [ 00:09:43.807 { 00:09:43.807 "name": null, 00:09:43.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.807 "is_configured": false, 00:09:43.807 "data_offset": 0, 00:09:43.807 "data_size": 63488 00:09:43.807 }, 00:09:43.807 { 00:09:43.807 "name": "BaseBdev2", 00:09:43.807 "uuid": "044f0e23-5d19-4ea9-93ae-1ad89f007136", 00:09:43.807 "is_configured": true, 00:09:43.807 "data_offset": 2048, 00:09:43.807 "data_size": 63488 00:09:43.807 }, 00:09:43.807 { 00:09:43.807 "name": "BaseBdev3", 00:09:43.807 "uuid": "d2f22bd0-7d3c-431a-b9fd-d231becf95b5", 00:09:43.807 "is_configured": true, 00:09:43.807 "data_offset": 2048, 00:09:43.807 "data_size": 63488 00:09:43.807 }, 00:09:43.807 { 00:09:43.807 "name": "BaseBdev4", 00:09:43.807 "uuid": "c923c460-941e-4cc2-a483-0d13e65b623e", 00:09:43.807 "is_configured": true, 00:09:43.807 "data_offset": 2048, 00:09:43.807 "data_size": 63488 00:09:43.807 } 00:09:43.807 ] 00:09:43.807 }' 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.807 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.372 09:44:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.372 [2024-10-30 09:44:22.726064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.372 [2024-10-30 09:44:22.807936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.372 [2024-10-30 09:44:22.894287] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:44.372 [2024-10-30 09:44:22.894366] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:44.372 [2024-10-30 09:44:22.941496] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.372 [2024-10-30 09:44:22.941538] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.372 [2024-10-30 09:44:22.941547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.372 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:44.373 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:44.373 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.373 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.373 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.373 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:44.373 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.373 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:44.373 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:44.373 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:44.373 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:44.373 09:44:22 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:44.373 09:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:44.373 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.373 09:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.631 BaseBdev2 00:09:44.631 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.631 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:44.631 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:44.631 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:44.631 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:44.631 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:44.631 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:44.631 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:44.631 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.631 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.631 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.631 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:44.631 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.631 09:44:23 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:09:44.631 [ 00:09:44.631 { 00:09:44.631 "name": "BaseBdev2", 00:09:44.631 "aliases": [ 00:09:44.631 "9363e24a-69ab-4ee3-a4c9-6aa1a7d8f669" 00:09:44.631 ], 00:09:44.631 "product_name": "Malloc disk", 00:09:44.631 "block_size": 512, 00:09:44.631 "num_blocks": 65536, 00:09:44.631 "uuid": "9363e24a-69ab-4ee3-a4c9-6aa1a7d8f669", 00:09:44.631 "assigned_rate_limits": { 00:09:44.632 "rw_ios_per_sec": 0, 00:09:44.632 "rw_mbytes_per_sec": 0, 00:09:44.632 "r_mbytes_per_sec": 0, 00:09:44.632 "w_mbytes_per_sec": 0 00:09:44.632 }, 00:09:44.632 "claimed": false, 00:09:44.632 "zoned": false, 00:09:44.632 "supported_io_types": { 00:09:44.632 "read": true, 00:09:44.632 "write": true, 00:09:44.632 "unmap": true, 00:09:44.632 "flush": true, 00:09:44.632 "reset": true, 00:09:44.632 "nvme_admin": false, 00:09:44.632 "nvme_io": false, 00:09:44.632 "nvme_io_md": false, 00:09:44.632 "write_zeroes": true, 00:09:44.632 "zcopy": true, 00:09:44.632 "get_zone_info": false, 00:09:44.632 "zone_management": false, 00:09:44.632 "zone_append": false, 00:09:44.632 "compare": false, 00:09:44.632 "compare_and_write": false, 00:09:44.632 "abort": true, 00:09:44.632 "seek_hole": false, 00:09:44.632 "seek_data": false, 00:09:44.632 "copy": true, 00:09:44.632 "nvme_iov_md": false 00:09:44.632 }, 00:09:44.632 "memory_domains": [ 00:09:44.632 { 00:09:44.632 "dma_device_id": "system", 00:09:44.632 "dma_device_type": 1 00:09:44.632 }, 00:09:44.632 { 00:09:44.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.632 "dma_device_type": 2 00:09:44.632 } 00:09:44.632 ], 00:09:44.632 "driver_specific": {} 00:09:44.632 } 00:09:44.632 ] 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:44.632 09:44:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.632 BaseBdev3 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.632 09:44:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.632 [ 00:09:44.632 { 00:09:44.632 "name": "BaseBdev3", 00:09:44.632 "aliases": [ 00:09:44.632 "e8b426d7-630e-42ff-a96a-9f1f24a4a4fc" 00:09:44.632 ], 00:09:44.632 "product_name": "Malloc disk", 00:09:44.632 "block_size": 512, 00:09:44.632 "num_blocks": 65536, 00:09:44.632 "uuid": "e8b426d7-630e-42ff-a96a-9f1f24a4a4fc", 00:09:44.632 "assigned_rate_limits": { 00:09:44.632 "rw_ios_per_sec": 0, 00:09:44.632 "rw_mbytes_per_sec": 0, 00:09:44.632 "r_mbytes_per_sec": 0, 00:09:44.632 "w_mbytes_per_sec": 0 00:09:44.632 }, 00:09:44.632 "claimed": false, 00:09:44.632 "zoned": false, 00:09:44.632 "supported_io_types": { 00:09:44.632 "read": true, 00:09:44.632 "write": true, 00:09:44.632 "unmap": true, 00:09:44.632 "flush": true, 00:09:44.632 "reset": true, 00:09:44.632 "nvme_admin": false, 00:09:44.632 "nvme_io": false, 00:09:44.632 "nvme_io_md": false, 00:09:44.632 "write_zeroes": true, 00:09:44.632 "zcopy": true, 00:09:44.632 "get_zone_info": false, 00:09:44.632 "zone_management": false, 00:09:44.632 "zone_append": false, 00:09:44.632 "compare": false, 00:09:44.632 "compare_and_write": false, 00:09:44.632 "abort": true, 00:09:44.632 "seek_hole": false, 00:09:44.632 "seek_data": false, 00:09:44.632 "copy": true, 00:09:44.632 "nvme_iov_md": false 00:09:44.632 }, 00:09:44.632 "memory_domains": [ 00:09:44.632 { 00:09:44.632 "dma_device_id": "system", 00:09:44.632 "dma_device_type": 1 00:09:44.632 }, 00:09:44.632 { 00:09:44.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.632 "dma_device_type": 2 00:09:44.632 } 00:09:44.632 ], 00:09:44.632 "driver_specific": {} 00:09:44.632 } 00:09:44.632 ] 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.632 BaseBdev4 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.632 [ 00:09:44.632 { 00:09:44.632 "name": "BaseBdev4", 00:09:44.632 "aliases": [ 00:09:44.632 "fe2c32ec-3b54-42bb-a460-2f313864b4a5" 00:09:44.632 ], 00:09:44.632 "product_name": "Malloc disk", 00:09:44.632 "block_size": 512, 00:09:44.632 "num_blocks": 65536, 00:09:44.632 "uuid": "fe2c32ec-3b54-42bb-a460-2f313864b4a5", 00:09:44.632 "assigned_rate_limits": { 00:09:44.632 "rw_ios_per_sec": 0, 00:09:44.632 "rw_mbytes_per_sec": 0, 00:09:44.632 "r_mbytes_per_sec": 0, 00:09:44.632 "w_mbytes_per_sec": 0 00:09:44.632 }, 00:09:44.632 "claimed": false, 00:09:44.632 "zoned": false, 00:09:44.632 "supported_io_types": { 00:09:44.632 "read": true, 00:09:44.632 "write": true, 00:09:44.632 "unmap": true, 00:09:44.632 "flush": true, 00:09:44.632 "reset": true, 00:09:44.632 "nvme_admin": false, 00:09:44.632 "nvme_io": false, 00:09:44.632 "nvme_io_md": false, 00:09:44.632 "write_zeroes": true, 00:09:44.632 "zcopy": true, 00:09:44.632 "get_zone_info": false, 00:09:44.632 "zone_management": false, 00:09:44.632 "zone_append": false, 00:09:44.632 "compare": false, 00:09:44.632 "compare_and_write": false, 00:09:44.632 "abort": true, 00:09:44.632 "seek_hole": false, 00:09:44.632 "seek_data": false, 00:09:44.632 "copy": true, 00:09:44.632 "nvme_iov_md": false 00:09:44.632 }, 00:09:44.632 "memory_domains": [ 00:09:44.632 { 00:09:44.632 "dma_device_id": "system", 00:09:44.632 "dma_device_type": 1 00:09:44.632 }, 00:09:44.632 { 00:09:44.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.632 "dma_device_type": 2 00:09:44.632 } 00:09:44.632 ], 00:09:44.632 "driver_specific": {} 00:09:44.632 } 00:09:44.632 ] 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.632 [2024-10-30 09:44:23.127383] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:44.632 [2024-10-30 09:44:23.127425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:44.632 [2024-10-30 09:44:23.127440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:44.632 [2024-10-30 09:44:23.128928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:44.632 [2024-10-30 09:44:23.128969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:44.632 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.633 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:44.633 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.633 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.633 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.633 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:09:44.633 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:44.633 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.633 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.633 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.633 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.633 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.633 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.633 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.633 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.633 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.633 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.633 "name": "Existed_Raid", 00:09:44.633 "uuid": "03c9d245-712d-4f45-8a0a-a05ab56490f7", 00:09:44.633 "strip_size_kb": 0, 00:09:44.633 "state": "configuring", 00:09:44.633 "raid_level": "raid1", 00:09:44.633 "superblock": true, 00:09:44.633 "num_base_bdevs": 4, 00:09:44.633 "num_base_bdevs_discovered": 3, 00:09:44.633 "num_base_bdevs_operational": 4, 00:09:44.633 "base_bdevs_list": [ 00:09:44.633 { 00:09:44.633 "name": "BaseBdev1", 00:09:44.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.633 "is_configured": false, 00:09:44.633 "data_offset": 0, 00:09:44.633 "data_size": 0 00:09:44.633 }, 00:09:44.633 { 00:09:44.633 "name": "BaseBdev2", 00:09:44.633 "uuid": "9363e24a-69ab-4ee3-a4c9-6aa1a7d8f669", 
00:09:44.633 "is_configured": true, 00:09:44.633 "data_offset": 2048, 00:09:44.633 "data_size": 63488 00:09:44.633 }, 00:09:44.633 { 00:09:44.633 "name": "BaseBdev3", 00:09:44.633 "uuid": "e8b426d7-630e-42ff-a96a-9f1f24a4a4fc", 00:09:44.633 "is_configured": true, 00:09:44.633 "data_offset": 2048, 00:09:44.633 "data_size": 63488 00:09:44.633 }, 00:09:44.633 { 00:09:44.633 "name": "BaseBdev4", 00:09:44.633 "uuid": "fe2c32ec-3b54-42bb-a460-2f313864b4a5", 00:09:44.633 "is_configured": true, 00:09:44.633 "data_offset": 2048, 00:09:44.633 "data_size": 63488 00:09:44.633 } 00:09:44.633 ] 00:09:44.633 }' 00:09:44.633 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.633 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.892 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:44.892 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.892 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.892 [2024-10-30 09:44:23.431456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:44.892 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.892 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:44.892 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.892 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.892 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.892 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:09:44.892 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:44.892 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.892 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.892 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.892 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.892 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.892 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.892 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.892 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.892 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.892 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.892 "name": "Existed_Raid", 00:09:44.892 "uuid": "03c9d245-712d-4f45-8a0a-a05ab56490f7", 00:09:44.892 "strip_size_kb": 0, 00:09:44.892 "state": "configuring", 00:09:44.892 "raid_level": "raid1", 00:09:44.892 "superblock": true, 00:09:44.892 "num_base_bdevs": 4, 00:09:44.892 "num_base_bdevs_discovered": 2, 00:09:44.892 "num_base_bdevs_operational": 4, 00:09:44.892 "base_bdevs_list": [ 00:09:44.892 { 00:09:44.892 "name": "BaseBdev1", 00:09:44.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.892 "is_configured": false, 00:09:44.892 "data_offset": 0, 00:09:44.892 "data_size": 0 00:09:44.892 }, 00:09:44.892 { 00:09:44.892 "name": null, 00:09:44.892 "uuid": "9363e24a-69ab-4ee3-a4c9-6aa1a7d8f669", 00:09:44.892 
"is_configured": false, 00:09:44.892 "data_offset": 0, 00:09:44.892 "data_size": 63488 00:09:44.892 }, 00:09:44.892 { 00:09:44.892 "name": "BaseBdev3", 00:09:44.892 "uuid": "e8b426d7-630e-42ff-a96a-9f1f24a4a4fc", 00:09:44.892 "is_configured": true, 00:09:44.892 "data_offset": 2048, 00:09:44.892 "data_size": 63488 00:09:44.892 }, 00:09:44.892 { 00:09:44.892 "name": "BaseBdev4", 00:09:44.892 "uuid": "fe2c32ec-3b54-42bb-a460-2f313864b4a5", 00:09:44.892 "is_configured": true, 00:09:44.892 "data_offset": 2048, 00:09:44.892 "data_size": 63488 00:09:44.892 } 00:09:44.892 ] 00:09:44.892 }' 00:09:44.892 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.892 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.178 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.178 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.178 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:45.178 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.178 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.468 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:45.468 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:45.468 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.468 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.468 [2024-10-30 09:44:23.805495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:45.468 BaseBdev1 
00:09:45.468 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.468 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:45.468 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:45.468 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:45.468 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:45.468 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:45.468 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:45.468 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:45.468 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.468 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.468 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.468 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:45.468 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.468 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.468 [ 00:09:45.468 { 00:09:45.468 "name": "BaseBdev1", 00:09:45.468 "aliases": [ 00:09:45.468 "d27216ee-89e0-4e05-bb4a-6364c64e2cc1" 00:09:45.468 ], 00:09:45.468 "product_name": "Malloc disk", 00:09:45.468 "block_size": 512, 00:09:45.468 "num_blocks": 65536, 00:09:45.468 "uuid": "d27216ee-89e0-4e05-bb4a-6364c64e2cc1", 00:09:45.468 "assigned_rate_limits": { 00:09:45.468 
"rw_ios_per_sec": 0, 00:09:45.468 "rw_mbytes_per_sec": 0, 00:09:45.468 "r_mbytes_per_sec": 0, 00:09:45.468 "w_mbytes_per_sec": 0 00:09:45.468 }, 00:09:45.468 "claimed": true, 00:09:45.468 "claim_type": "exclusive_write", 00:09:45.468 "zoned": false, 00:09:45.468 "supported_io_types": { 00:09:45.468 "read": true, 00:09:45.468 "write": true, 00:09:45.468 "unmap": true, 00:09:45.468 "flush": true, 00:09:45.468 "reset": true, 00:09:45.468 "nvme_admin": false, 00:09:45.468 "nvme_io": false, 00:09:45.468 "nvme_io_md": false, 00:09:45.468 "write_zeroes": true, 00:09:45.468 "zcopy": true, 00:09:45.468 "get_zone_info": false, 00:09:45.468 "zone_management": false, 00:09:45.468 "zone_append": false, 00:09:45.468 "compare": false, 00:09:45.468 "compare_and_write": false, 00:09:45.468 "abort": true, 00:09:45.468 "seek_hole": false, 00:09:45.468 "seek_data": false, 00:09:45.468 "copy": true, 00:09:45.468 "nvme_iov_md": false 00:09:45.469 }, 00:09:45.469 "memory_domains": [ 00:09:45.469 { 00:09:45.469 "dma_device_id": "system", 00:09:45.469 "dma_device_type": 1 00:09:45.469 }, 00:09:45.469 { 00:09:45.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.469 "dma_device_type": 2 00:09:45.469 } 00:09:45.469 ], 00:09:45.469 "driver_specific": {} 00:09:45.469 } 00:09:45.469 ] 00:09:45.469 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.469 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:45.469 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:45.469 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.469 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.469 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:45.469 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.469 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:45.469 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.469 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.469 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.469 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.469 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.469 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.469 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.469 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.469 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.469 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.469 "name": "Existed_Raid", 00:09:45.469 "uuid": "03c9d245-712d-4f45-8a0a-a05ab56490f7", 00:09:45.469 "strip_size_kb": 0, 00:09:45.469 "state": "configuring", 00:09:45.469 "raid_level": "raid1", 00:09:45.469 "superblock": true, 00:09:45.469 "num_base_bdevs": 4, 00:09:45.469 "num_base_bdevs_discovered": 3, 00:09:45.469 "num_base_bdevs_operational": 4, 00:09:45.469 "base_bdevs_list": [ 00:09:45.469 { 00:09:45.469 "name": "BaseBdev1", 00:09:45.469 "uuid": "d27216ee-89e0-4e05-bb4a-6364c64e2cc1", 00:09:45.469 "is_configured": true, 00:09:45.469 "data_offset": 2048, 00:09:45.469 "data_size": 63488 
00:09:45.469 }, 00:09:45.469 { 00:09:45.469 "name": null, 00:09:45.469 "uuid": "9363e24a-69ab-4ee3-a4c9-6aa1a7d8f669", 00:09:45.469 "is_configured": false, 00:09:45.469 "data_offset": 0, 00:09:45.469 "data_size": 63488 00:09:45.469 }, 00:09:45.469 { 00:09:45.469 "name": "BaseBdev3", 00:09:45.469 "uuid": "e8b426d7-630e-42ff-a96a-9f1f24a4a4fc", 00:09:45.469 "is_configured": true, 00:09:45.469 "data_offset": 2048, 00:09:45.469 "data_size": 63488 00:09:45.469 }, 00:09:45.469 { 00:09:45.469 "name": "BaseBdev4", 00:09:45.469 "uuid": "fe2c32ec-3b54-42bb-a460-2f313864b4a5", 00:09:45.469 "is_configured": true, 00:09:45.469 "data_offset": 2048, 00:09:45.469 "data_size": 63488 00:09:45.469 } 00:09:45.469 ] 00:09:45.469 }' 00:09:45.469 09:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.469 09:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.727 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.727 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.727 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.727 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:45.727 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.727 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:45.727 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:45.727 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.727 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.727 
[2024-10-30 09:44:24.189629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:45.727 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.727 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:45.727 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.727 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.727 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.727 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.727 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:45.727 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.727 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.728 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.728 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.728 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.728 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.728 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.728 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.728 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.728 09:44:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.728 "name": "Existed_Raid", 00:09:45.728 "uuid": "03c9d245-712d-4f45-8a0a-a05ab56490f7", 00:09:45.728 "strip_size_kb": 0, 00:09:45.728 "state": "configuring", 00:09:45.728 "raid_level": "raid1", 00:09:45.728 "superblock": true, 00:09:45.728 "num_base_bdevs": 4, 00:09:45.728 "num_base_bdevs_discovered": 2, 00:09:45.728 "num_base_bdevs_operational": 4, 00:09:45.728 "base_bdevs_list": [ 00:09:45.728 { 00:09:45.728 "name": "BaseBdev1", 00:09:45.728 "uuid": "d27216ee-89e0-4e05-bb4a-6364c64e2cc1", 00:09:45.728 "is_configured": true, 00:09:45.728 "data_offset": 2048, 00:09:45.728 "data_size": 63488 00:09:45.728 }, 00:09:45.728 { 00:09:45.728 "name": null, 00:09:45.728 "uuid": "9363e24a-69ab-4ee3-a4c9-6aa1a7d8f669", 00:09:45.728 "is_configured": false, 00:09:45.728 "data_offset": 0, 00:09:45.728 "data_size": 63488 00:09:45.728 }, 00:09:45.728 { 00:09:45.728 "name": null, 00:09:45.728 "uuid": "e8b426d7-630e-42ff-a96a-9f1f24a4a4fc", 00:09:45.728 "is_configured": false, 00:09:45.728 "data_offset": 0, 00:09:45.728 "data_size": 63488 00:09:45.728 }, 00:09:45.728 { 00:09:45.728 "name": "BaseBdev4", 00:09:45.728 "uuid": "fe2c32ec-3b54-42bb-a460-2f313864b4a5", 00:09:45.728 "is_configured": true, 00:09:45.728 "data_offset": 2048, 00:09:45.728 "data_size": 63488 00:09:45.728 } 00:09:45.728 ] 00:09:45.728 }' 00:09:45.728 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.728 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.986 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:45.986 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.986 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.986 
09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.986 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.986 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:45.986 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:45.986 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.986 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.986 [2024-10-30 09:44:24.561695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:45.986 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.986 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:45.986 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.986 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.986 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.986 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.986 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:45.986 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.986 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.986 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:45.986 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.986 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.986 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.986 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.986 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.986 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.986 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.986 "name": "Existed_Raid", 00:09:45.986 "uuid": "03c9d245-712d-4f45-8a0a-a05ab56490f7", 00:09:45.986 "strip_size_kb": 0, 00:09:45.986 "state": "configuring", 00:09:45.986 "raid_level": "raid1", 00:09:45.986 "superblock": true, 00:09:45.986 "num_base_bdevs": 4, 00:09:45.986 "num_base_bdevs_discovered": 3, 00:09:45.986 "num_base_bdevs_operational": 4, 00:09:45.986 "base_bdevs_list": [ 00:09:45.986 { 00:09:45.986 "name": "BaseBdev1", 00:09:45.986 "uuid": "d27216ee-89e0-4e05-bb4a-6364c64e2cc1", 00:09:45.986 "is_configured": true, 00:09:45.986 "data_offset": 2048, 00:09:45.986 "data_size": 63488 00:09:45.987 }, 00:09:45.987 { 00:09:45.987 "name": null, 00:09:45.987 "uuid": "9363e24a-69ab-4ee3-a4c9-6aa1a7d8f669", 00:09:45.987 "is_configured": false, 00:09:45.987 "data_offset": 0, 00:09:45.987 "data_size": 63488 00:09:45.987 }, 00:09:45.987 { 00:09:45.987 "name": "BaseBdev3", 00:09:45.987 "uuid": "e8b426d7-630e-42ff-a96a-9f1f24a4a4fc", 00:09:45.987 "is_configured": true, 00:09:45.987 "data_offset": 2048, 00:09:45.987 "data_size": 63488 00:09:45.987 }, 00:09:45.987 { 00:09:45.987 "name": "BaseBdev4", 00:09:45.987 "uuid": 
"fe2c32ec-3b54-42bb-a460-2f313864b4a5", 00:09:45.987 "is_configured": true, 00:09:45.987 "data_offset": 2048, 00:09:45.987 "data_size": 63488 00:09:45.987 } 00:09:45.987 ] 00:09:45.987 }' 00:09:45.987 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.987 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.553 [2024-10-30 09:44:24.917796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.553 "name": "Existed_Raid", 00:09:46.553 "uuid": "03c9d245-712d-4f45-8a0a-a05ab56490f7", 00:09:46.553 "strip_size_kb": 0, 00:09:46.553 "state": "configuring", 00:09:46.553 "raid_level": "raid1", 00:09:46.553 "superblock": true, 00:09:46.553 "num_base_bdevs": 4, 00:09:46.553 "num_base_bdevs_discovered": 2, 00:09:46.553 "num_base_bdevs_operational": 4, 00:09:46.553 "base_bdevs_list": [ 00:09:46.553 { 00:09:46.553 "name": null, 00:09:46.553 
"uuid": "d27216ee-89e0-4e05-bb4a-6364c64e2cc1", 00:09:46.553 "is_configured": false, 00:09:46.553 "data_offset": 0, 00:09:46.553 "data_size": 63488 00:09:46.553 }, 00:09:46.553 { 00:09:46.553 "name": null, 00:09:46.553 "uuid": "9363e24a-69ab-4ee3-a4c9-6aa1a7d8f669", 00:09:46.553 "is_configured": false, 00:09:46.553 "data_offset": 0, 00:09:46.553 "data_size": 63488 00:09:46.553 }, 00:09:46.553 { 00:09:46.553 "name": "BaseBdev3", 00:09:46.553 "uuid": "e8b426d7-630e-42ff-a96a-9f1f24a4a4fc", 00:09:46.553 "is_configured": true, 00:09:46.553 "data_offset": 2048, 00:09:46.553 "data_size": 63488 00:09:46.553 }, 00:09:46.553 { 00:09:46.553 "name": "BaseBdev4", 00:09:46.553 "uuid": "fe2c32ec-3b54-42bb-a460-2f313864b4a5", 00:09:46.553 "is_configured": true, 00:09:46.553 "data_offset": 2048, 00:09:46.553 "data_size": 63488 00:09:46.553 } 00:09:46.553 ] 00:09:46.553 }' 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.553 09:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.812 [2024-10-30 09:44:25.304306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.812 "name": "Existed_Raid", 00:09:46.812 "uuid": "03c9d245-712d-4f45-8a0a-a05ab56490f7", 00:09:46.812 "strip_size_kb": 0, 00:09:46.812 "state": "configuring", 00:09:46.812 "raid_level": "raid1", 00:09:46.812 "superblock": true, 00:09:46.812 "num_base_bdevs": 4, 00:09:46.812 "num_base_bdevs_discovered": 3, 00:09:46.812 "num_base_bdevs_operational": 4, 00:09:46.812 "base_bdevs_list": [ 00:09:46.812 { 00:09:46.812 "name": null, 00:09:46.812 "uuid": "d27216ee-89e0-4e05-bb4a-6364c64e2cc1", 00:09:46.812 "is_configured": false, 00:09:46.812 "data_offset": 0, 00:09:46.812 "data_size": 63488 00:09:46.812 }, 00:09:46.812 { 00:09:46.812 "name": "BaseBdev2", 00:09:46.812 "uuid": "9363e24a-69ab-4ee3-a4c9-6aa1a7d8f669", 00:09:46.812 "is_configured": true, 00:09:46.812 "data_offset": 2048, 00:09:46.812 "data_size": 63488 00:09:46.812 }, 00:09:46.812 { 00:09:46.812 "name": "BaseBdev3", 00:09:46.812 "uuid": "e8b426d7-630e-42ff-a96a-9f1f24a4a4fc", 00:09:46.812 "is_configured": true, 00:09:46.812 "data_offset": 2048, 00:09:46.812 "data_size": 63488 00:09:46.812 }, 00:09:46.812 { 00:09:46.812 "name": "BaseBdev4", 00:09:46.812 "uuid": "fe2c32ec-3b54-42bb-a460-2f313864b4a5", 00:09:46.812 "is_configured": true, 00:09:46.812 "data_offset": 2048, 00:09:46.812 "data_size": 63488 00:09:46.812 } 00:09:46.812 ] 00:09:46.812 }' 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.812 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.070 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.070 09:44:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:47.070 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.070 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.070 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.070 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:47.070 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.070 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.070 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.070 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:47.070 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.070 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d27216ee-89e0-4e05-bb4a-6364c64e2cc1 00:09:47.070 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.070 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.328 NewBaseBdev 00:09:47.328 [2024-10-30 09:44:25.690788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:47.328 [2024-10-30 09:44:25.690952] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:47.328 [2024-10-30 09:44:25.690964] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:47.328 [2024-10-30 09:44:25.691186] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:09:47.328 [2024-10-30 09:44:25.691296] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:47.328 [2024-10-30 09:44:25.691302] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:47.328 [2024-10-30 09:44:25.691397] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.328 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.328 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:47.328 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:09:47.328 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:47.328 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:47.328 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:47.328 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:47.328 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:47.328 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.328 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.328 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.328 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:47.328 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.328 09:44:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:47.328 [ 00:09:47.328 { 00:09:47.328 "name": "NewBaseBdev", 00:09:47.328 "aliases": [ 00:09:47.328 "d27216ee-89e0-4e05-bb4a-6364c64e2cc1" 00:09:47.328 ], 00:09:47.328 "product_name": "Malloc disk", 00:09:47.328 "block_size": 512, 00:09:47.328 "num_blocks": 65536, 00:09:47.328 "uuid": "d27216ee-89e0-4e05-bb4a-6364c64e2cc1", 00:09:47.328 "assigned_rate_limits": { 00:09:47.328 "rw_ios_per_sec": 0, 00:09:47.328 "rw_mbytes_per_sec": 0, 00:09:47.328 "r_mbytes_per_sec": 0, 00:09:47.328 "w_mbytes_per_sec": 0 00:09:47.328 }, 00:09:47.328 "claimed": true, 00:09:47.328 "claim_type": "exclusive_write", 00:09:47.328 "zoned": false, 00:09:47.328 "supported_io_types": { 00:09:47.328 "read": true, 00:09:47.328 "write": true, 00:09:47.328 "unmap": true, 00:09:47.328 "flush": true, 00:09:47.328 "reset": true, 00:09:47.328 "nvme_admin": false, 00:09:47.328 "nvme_io": false, 00:09:47.328 "nvme_io_md": false, 00:09:47.328 "write_zeroes": true, 00:09:47.328 "zcopy": true, 00:09:47.328 "get_zone_info": false, 00:09:47.328 "zone_management": false, 00:09:47.328 "zone_append": false, 00:09:47.328 "compare": false, 00:09:47.328 "compare_and_write": false, 00:09:47.328 "abort": true, 00:09:47.329 "seek_hole": false, 00:09:47.329 "seek_data": false, 00:09:47.329 "copy": true, 00:09:47.329 "nvme_iov_md": false 00:09:47.329 }, 00:09:47.329 "memory_domains": [ 00:09:47.329 { 00:09:47.329 "dma_device_id": "system", 00:09:47.329 "dma_device_type": 1 00:09:47.329 }, 00:09:47.329 { 00:09:47.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.329 "dma_device_type": 2 00:09:47.329 } 00:09:47.329 ], 00:09:47.329 "driver_specific": {} 00:09:47.329 } 00:09:47.329 ] 00:09:47.329 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.329 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:47.329 09:44:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:09:47.329 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.329 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.329 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.329 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.329 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.329 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.329 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.329 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.329 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.329 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.329 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.329 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.329 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.329 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.329 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.329 "name": "Existed_Raid", 00:09:47.329 "uuid": "03c9d245-712d-4f45-8a0a-a05ab56490f7", 00:09:47.329 "strip_size_kb": 0, 00:09:47.329 "state": "online", 00:09:47.329 "raid_level": 
"raid1", 00:09:47.329 "superblock": true, 00:09:47.329 "num_base_bdevs": 4, 00:09:47.329 "num_base_bdevs_discovered": 4, 00:09:47.329 "num_base_bdevs_operational": 4, 00:09:47.329 "base_bdevs_list": [ 00:09:47.329 { 00:09:47.329 "name": "NewBaseBdev", 00:09:47.329 "uuid": "d27216ee-89e0-4e05-bb4a-6364c64e2cc1", 00:09:47.329 "is_configured": true, 00:09:47.329 "data_offset": 2048, 00:09:47.329 "data_size": 63488 00:09:47.329 }, 00:09:47.329 { 00:09:47.329 "name": "BaseBdev2", 00:09:47.329 "uuid": "9363e24a-69ab-4ee3-a4c9-6aa1a7d8f669", 00:09:47.329 "is_configured": true, 00:09:47.329 "data_offset": 2048, 00:09:47.329 "data_size": 63488 00:09:47.329 }, 00:09:47.329 { 00:09:47.329 "name": "BaseBdev3", 00:09:47.329 "uuid": "e8b426d7-630e-42ff-a96a-9f1f24a4a4fc", 00:09:47.329 "is_configured": true, 00:09:47.329 "data_offset": 2048, 00:09:47.329 "data_size": 63488 00:09:47.329 }, 00:09:47.329 { 00:09:47.329 "name": "BaseBdev4", 00:09:47.329 "uuid": "fe2c32ec-3b54-42bb-a460-2f313864b4a5", 00:09:47.329 "is_configured": true, 00:09:47.329 "data_offset": 2048, 00:09:47.329 "data_size": 63488 00:09:47.329 } 00:09:47.329 ] 00:09:47.329 }' 00:09:47.329 09:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.329 09:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.588 [2024-10-30 09:44:26.043183] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:47.588 "name": "Existed_Raid", 00:09:47.588 "aliases": [ 00:09:47.588 "03c9d245-712d-4f45-8a0a-a05ab56490f7" 00:09:47.588 ], 00:09:47.588 "product_name": "Raid Volume", 00:09:47.588 "block_size": 512, 00:09:47.588 "num_blocks": 63488, 00:09:47.588 "uuid": "03c9d245-712d-4f45-8a0a-a05ab56490f7", 00:09:47.588 "assigned_rate_limits": { 00:09:47.588 "rw_ios_per_sec": 0, 00:09:47.588 "rw_mbytes_per_sec": 0, 00:09:47.588 "r_mbytes_per_sec": 0, 00:09:47.588 "w_mbytes_per_sec": 0 00:09:47.588 }, 00:09:47.588 "claimed": false, 00:09:47.588 "zoned": false, 00:09:47.588 "supported_io_types": { 00:09:47.588 "read": true, 00:09:47.588 "write": true, 00:09:47.588 "unmap": false, 00:09:47.588 "flush": false, 00:09:47.588 "reset": true, 00:09:47.588 "nvme_admin": false, 00:09:47.588 "nvme_io": false, 00:09:47.588 "nvme_io_md": false, 00:09:47.588 "write_zeroes": true, 00:09:47.588 "zcopy": false, 00:09:47.588 "get_zone_info": false, 00:09:47.588 "zone_management": false, 00:09:47.588 "zone_append": false, 00:09:47.588 "compare": false, 00:09:47.588 "compare_and_write": false, 00:09:47.588 "abort": false, 00:09:47.588 "seek_hole": false, 
00:09:47.588 "seek_data": false, 00:09:47.588 "copy": false, 00:09:47.588 "nvme_iov_md": false 00:09:47.588 }, 00:09:47.588 "memory_domains": [ 00:09:47.588 { 00:09:47.588 "dma_device_id": "system", 00:09:47.588 "dma_device_type": 1 00:09:47.588 }, 00:09:47.588 { 00:09:47.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.588 "dma_device_type": 2 00:09:47.588 }, 00:09:47.588 { 00:09:47.588 "dma_device_id": "system", 00:09:47.588 "dma_device_type": 1 00:09:47.588 }, 00:09:47.588 { 00:09:47.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.588 "dma_device_type": 2 00:09:47.588 }, 00:09:47.588 { 00:09:47.588 "dma_device_id": "system", 00:09:47.588 "dma_device_type": 1 00:09:47.588 }, 00:09:47.588 { 00:09:47.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.588 "dma_device_type": 2 00:09:47.588 }, 00:09:47.588 { 00:09:47.588 "dma_device_id": "system", 00:09:47.588 "dma_device_type": 1 00:09:47.588 }, 00:09:47.588 { 00:09:47.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.588 "dma_device_type": 2 00:09:47.588 } 00:09:47.588 ], 00:09:47.588 "driver_specific": { 00:09:47.588 "raid": { 00:09:47.588 "uuid": "03c9d245-712d-4f45-8a0a-a05ab56490f7", 00:09:47.588 "strip_size_kb": 0, 00:09:47.588 "state": "online", 00:09:47.588 "raid_level": "raid1", 00:09:47.588 "superblock": true, 00:09:47.588 "num_base_bdevs": 4, 00:09:47.588 "num_base_bdevs_discovered": 4, 00:09:47.588 "num_base_bdevs_operational": 4, 00:09:47.588 "base_bdevs_list": [ 00:09:47.588 { 00:09:47.588 "name": "NewBaseBdev", 00:09:47.588 "uuid": "d27216ee-89e0-4e05-bb4a-6364c64e2cc1", 00:09:47.588 "is_configured": true, 00:09:47.588 "data_offset": 2048, 00:09:47.588 "data_size": 63488 00:09:47.588 }, 00:09:47.588 { 00:09:47.588 "name": "BaseBdev2", 00:09:47.588 "uuid": "9363e24a-69ab-4ee3-a4c9-6aa1a7d8f669", 00:09:47.588 "is_configured": true, 00:09:47.588 "data_offset": 2048, 00:09:47.588 "data_size": 63488 00:09:47.588 }, 00:09:47.588 { 00:09:47.588 "name": "BaseBdev3", 00:09:47.588 "uuid": 
"e8b426d7-630e-42ff-a96a-9f1f24a4a4fc", 00:09:47.588 "is_configured": true, 00:09:47.588 "data_offset": 2048, 00:09:47.588 "data_size": 63488 00:09:47.588 }, 00:09:47.588 { 00:09:47.588 "name": "BaseBdev4", 00:09:47.588 "uuid": "fe2c32ec-3b54-42bb-a460-2f313864b4a5", 00:09:47.588 "is_configured": true, 00:09:47.588 "data_offset": 2048, 00:09:47.588 "data_size": 63488 00:09:47.588 } 00:09:47.588 ] 00:09:47.588 } 00:09:47.588 } 00:09:47.588 }' 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:47.588 BaseBdev2 00:09:47.588 BaseBdev3 00:09:47.588 BaseBdev4' 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.588 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.846 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.846 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.846 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.846 
09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.846 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:47.846 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.846 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.846 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.846 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.846 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.846 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.846 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:47.846 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.846 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.846 [2024-10-30 09:44:26.270911] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:47.846 [2024-10-30 09:44:26.271008] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.846 [2024-10-30 09:44:26.271083] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.846 [2024-10-30 09:44:26.271313] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.846 [2024-10-30 09:44:26.271323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:47.846 09:44:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.846 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71982 00:09:47.846 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 71982 ']' 00:09:47.846 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 71982 00:09:47.846 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:09:47.846 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:47.846 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71982 00:09:47.846 killing process with pid 71982 00:09:47.846 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:47.847 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:47.847 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71982' 00:09:47.847 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 71982 00:09:47.847 [2024-10-30 09:44:26.302199] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:47.847 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 71982 00:09:48.103 [2024-10-30 09:44:26.488722] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:48.712 09:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:48.712 00:09:48.712 real 0m8.017s 00:09:48.712 user 0m13.050s 00:09:48.712 sys 0m1.284s 00:09:48.712 ************************************ 00:09:48.712 END TEST raid_state_function_test_sb 00:09:48.712 ************************************ 00:09:48.712 09:44:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:48.712 09:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.712 09:44:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:09:48.712 09:44:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:48.712 09:44:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:48.712 09:44:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:48.712 ************************************ 00:09:48.712 START TEST raid_superblock_test 00:09:48.712 ************************************ 00:09:48.712 09:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 4 00:09:48.712 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:48.712 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:48.712 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:48.712 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:48.712 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:48.712 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:48.712 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:48.712 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:48.712 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:48.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:48.712 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:48.712 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:48.712 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:48.712 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:48.712 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:48.712 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:48.712 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72619 00:09:48.712 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72619 00:09:48.712 09:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 72619 ']' 00:09:48.712 09:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.712 09:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:48.712 09:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.712 09:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:48.712 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:48.712 09:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.712 [2024-10-30 09:44:27.149318] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:09:48.712 [2024-10-30 09:44:27.149579] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72619 ] 00:09:48.712 [2024-10-30 09:44:27.303990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.970 [2024-10-30 09:44:27.384321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.970 [2024-10-30 09:44:27.491211] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.970 [2024-10-30 09:44:27.491346] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.534 09:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:49.534 09:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:09:49.534 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:49.534 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:49.534 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:49.534 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:49.534 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:49.534 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:49.534 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:49.534 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:49.534 09:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:49.534 
09:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.534 09:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.534 malloc1 00:09:49.534 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.534 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:49.534 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.534 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.534 [2024-10-30 09:44:28.022967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:49.534 [2024-10-30 09:44:28.023132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.534 [2024-10-30 09:44:28.023166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:49.534 [2024-10-30 09:44:28.023359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.534 [2024-10-30 09:44:28.025131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.534 [2024-10-30 09:44:28.025219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:49.534 pt1 00:09:49.534 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.534 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:49.534 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:49.534 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:49.534 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:49.534 09:44:28 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:49.534 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:49.534 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:49.534 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.535 malloc2 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.535 [2024-10-30 09:44:28.058320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:49.535 [2024-10-30 09:44:28.058364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.535 [2024-10-30 09:44:28.058379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:49.535 [2024-10-30 09:44:28.058386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.535 [2024-10-30 09:44:28.060098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.535 [2024-10-30 09:44:28.060123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:49.535 
pt2 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.535 malloc3 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.535 [2024-10-30 09:44:28.110683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:49.535 [2024-10-30 09:44:28.110730] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.535 [2024-10-30 09:44:28.110746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:49.535 [2024-10-30 09:44:28.110753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.535 [2024-10-30 09:44:28.112451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.535 [2024-10-30 09:44:28.112480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:49.535 pt3 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.535 malloc4 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.535 [2024-10-30 09:44:28.141958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:49.535 [2024-10-30 09:44:28.141994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.535 [2024-10-30 09:44:28.142008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:49.535 [2024-10-30 09:44:28.142015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.535 [2024-10-30 09:44:28.143739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.535 [2024-10-30 09:44:28.143767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:49.535 pt4 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.535 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.535 [2024-10-30 09:44:28.149982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:49.535 [2024-10-30 09:44:28.151570] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:49.535 [2024-10-30 09:44:28.151619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:49.535 [2024-10-30 09:44:28.151655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:49.535 [2024-10-30 09:44:28.151800] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:49.535 [2024-10-30 09:44:28.151813] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:49.535 [2024-10-30 09:44:28.152026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:49.535 [2024-10-30 09:44:28.152161] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:49.535 [2024-10-30 09:44:28.152172] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:49.535 [2024-10-30 09:44:28.152278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.792 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.792 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:09:49.792 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.792 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.792 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.792 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.792 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:49.792 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.792 
09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.792 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.792 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.792 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.792 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.792 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.792 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.792 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.792 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.792 "name": "raid_bdev1", 00:09:49.792 "uuid": "192b01ab-a172-4230-a02e-46d17117fc96", 00:09:49.792 "strip_size_kb": 0, 00:09:49.792 "state": "online", 00:09:49.792 "raid_level": "raid1", 00:09:49.792 "superblock": true, 00:09:49.792 "num_base_bdevs": 4, 00:09:49.792 "num_base_bdevs_discovered": 4, 00:09:49.792 "num_base_bdevs_operational": 4, 00:09:49.792 "base_bdevs_list": [ 00:09:49.792 { 00:09:49.792 "name": "pt1", 00:09:49.792 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:49.792 "is_configured": true, 00:09:49.792 "data_offset": 2048, 00:09:49.792 "data_size": 63488 00:09:49.792 }, 00:09:49.792 { 00:09:49.792 "name": "pt2", 00:09:49.792 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.792 "is_configured": true, 00:09:49.792 "data_offset": 2048, 00:09:49.792 "data_size": 63488 00:09:49.792 }, 00:09:49.792 { 00:09:49.792 "name": "pt3", 00:09:49.792 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:49.792 "is_configured": true, 00:09:49.792 "data_offset": 2048, 00:09:49.792 "data_size": 63488 
00:09:49.792 }, 00:09:49.792 { 00:09:49.792 "name": "pt4", 00:09:49.792 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:49.792 "is_configured": true, 00:09:49.792 "data_offset": 2048, 00:09:49.792 "data_size": 63488 00:09:49.792 } 00:09:49.792 ] 00:09:49.792 }' 00:09:49.792 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.792 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.049 [2024-10-30 09:44:28.498317] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:50.049 "name": "raid_bdev1", 00:09:50.049 "aliases": [ 00:09:50.049 "192b01ab-a172-4230-a02e-46d17117fc96" 00:09:50.049 ], 
00:09:50.049 "product_name": "Raid Volume", 00:09:50.049 "block_size": 512, 00:09:50.049 "num_blocks": 63488, 00:09:50.049 "uuid": "192b01ab-a172-4230-a02e-46d17117fc96", 00:09:50.049 "assigned_rate_limits": { 00:09:50.049 "rw_ios_per_sec": 0, 00:09:50.049 "rw_mbytes_per_sec": 0, 00:09:50.049 "r_mbytes_per_sec": 0, 00:09:50.049 "w_mbytes_per_sec": 0 00:09:50.049 }, 00:09:50.049 "claimed": false, 00:09:50.049 "zoned": false, 00:09:50.049 "supported_io_types": { 00:09:50.049 "read": true, 00:09:50.049 "write": true, 00:09:50.049 "unmap": false, 00:09:50.049 "flush": false, 00:09:50.049 "reset": true, 00:09:50.049 "nvme_admin": false, 00:09:50.049 "nvme_io": false, 00:09:50.049 "nvme_io_md": false, 00:09:50.049 "write_zeroes": true, 00:09:50.049 "zcopy": false, 00:09:50.049 "get_zone_info": false, 00:09:50.049 "zone_management": false, 00:09:50.049 "zone_append": false, 00:09:50.049 "compare": false, 00:09:50.049 "compare_and_write": false, 00:09:50.049 "abort": false, 00:09:50.049 "seek_hole": false, 00:09:50.049 "seek_data": false, 00:09:50.049 "copy": false, 00:09:50.049 "nvme_iov_md": false 00:09:50.049 }, 00:09:50.049 "memory_domains": [ 00:09:50.049 { 00:09:50.049 "dma_device_id": "system", 00:09:50.049 "dma_device_type": 1 00:09:50.049 }, 00:09:50.049 { 00:09:50.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.049 "dma_device_type": 2 00:09:50.049 }, 00:09:50.049 { 00:09:50.049 "dma_device_id": "system", 00:09:50.049 "dma_device_type": 1 00:09:50.049 }, 00:09:50.049 { 00:09:50.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.049 "dma_device_type": 2 00:09:50.049 }, 00:09:50.049 { 00:09:50.049 "dma_device_id": "system", 00:09:50.049 "dma_device_type": 1 00:09:50.049 }, 00:09:50.049 { 00:09:50.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.049 "dma_device_type": 2 00:09:50.049 }, 00:09:50.049 { 00:09:50.049 "dma_device_id": "system", 00:09:50.049 "dma_device_type": 1 00:09:50.049 }, 00:09:50.049 { 00:09:50.049 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:50.049 "dma_device_type": 2 00:09:50.049 } 00:09:50.049 ], 00:09:50.049 "driver_specific": { 00:09:50.049 "raid": { 00:09:50.049 "uuid": "192b01ab-a172-4230-a02e-46d17117fc96", 00:09:50.049 "strip_size_kb": 0, 00:09:50.049 "state": "online", 00:09:50.049 "raid_level": "raid1", 00:09:50.049 "superblock": true, 00:09:50.049 "num_base_bdevs": 4, 00:09:50.049 "num_base_bdevs_discovered": 4, 00:09:50.049 "num_base_bdevs_operational": 4, 00:09:50.049 "base_bdevs_list": [ 00:09:50.049 { 00:09:50.049 "name": "pt1", 00:09:50.049 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:50.049 "is_configured": true, 00:09:50.049 "data_offset": 2048, 00:09:50.049 "data_size": 63488 00:09:50.049 }, 00:09:50.049 { 00:09:50.049 "name": "pt2", 00:09:50.049 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.049 "is_configured": true, 00:09:50.049 "data_offset": 2048, 00:09:50.049 "data_size": 63488 00:09:50.049 }, 00:09:50.049 { 00:09:50.049 "name": "pt3", 00:09:50.049 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:50.049 "is_configured": true, 00:09:50.049 "data_offset": 2048, 00:09:50.049 "data_size": 63488 00:09:50.049 }, 00:09:50.049 { 00:09:50.049 "name": "pt4", 00:09:50.049 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:50.049 "is_configured": true, 00:09:50.049 "data_offset": 2048, 00:09:50.049 "data_size": 63488 00:09:50.049 } 00:09:50.049 ] 00:09:50.049 } 00:09:50.049 } 00:09:50.049 }' 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:50.049 pt2 00:09:50.049 pt3 00:09:50.049 pt4' 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.049 09:44:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.049 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:50.306 [2024-10-30 09:44:28.710321] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=192b01ab-a172-4230-a02e-46d17117fc96 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 192b01ab-a172-4230-a02e-46d17117fc96 ']' 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.306 [2024-10-30 09:44:28.742077] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:50.306 [2024-10-30 09:44:28.742122] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:50.306 [2024-10-30 09:44:28.742193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.306 [2024-10-30 09:44:28.742273] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.306 [2024-10-30 09:44:28.742305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:50.306 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.307 [2024-10-30 09:44:28.850115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:50.307 [2024-10-30 09:44:28.851636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:50.307 [2024-10-30 09:44:28.851675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:50.307 [2024-10-30 09:44:28.851701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:50.307 [2024-10-30 09:44:28.851740] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:50.307 [2024-10-30 09:44:28.851782] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:50.307 [2024-10-30 09:44:28.851798] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:50.307 [2024-10-30 09:44:28.851813] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:50.307 [2024-10-30 09:44:28.851823] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:50.307 [2024-10-30 09:44:28.851832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:09:50.307 request: 00:09:50.307 { 00:09:50.307 "name": "raid_bdev1", 00:09:50.307 "raid_level": "raid1", 00:09:50.307 "base_bdevs": [ 00:09:50.307 "malloc1", 00:09:50.307 "malloc2", 00:09:50.307 "malloc3", 00:09:50.307 "malloc4" 00:09:50.307 ], 00:09:50.307 "superblock": false, 00:09:50.307 "method": "bdev_raid_create", 00:09:50.307 "req_id": 1 00:09:50.307 } 00:09:50.307 Got JSON-RPC error response 00:09:50.307 response: 00:09:50.307 { 00:09:50.307 "code": -17, 00:09:50.307 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:50.307 } 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:50.307 09:44:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.307 [2024-10-30 09:44:28.906122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:50.307 [2024-10-30 09:44:28.906172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.307 [2024-10-30 09:44:28.906185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:50.307 [2024-10-30 09:44:28.906194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.307 [2024-10-30 09:44:28.907968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.307 [2024-10-30 09:44:28.908111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:50.307 [2024-10-30 09:44:28.908184] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:50.307 [2024-10-30 09:44:28.908228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:50.307 pt1 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:50.307 09:44:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.307 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.564 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.564 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.564 "name": "raid_bdev1", 00:09:50.564 "uuid": "192b01ab-a172-4230-a02e-46d17117fc96", 00:09:50.564 "strip_size_kb": 0, 00:09:50.564 "state": "configuring", 00:09:50.564 "raid_level": "raid1", 00:09:50.564 "superblock": true, 00:09:50.564 "num_base_bdevs": 4, 00:09:50.564 "num_base_bdevs_discovered": 1, 00:09:50.564 "num_base_bdevs_operational": 4, 00:09:50.564 "base_bdevs_list": [ 00:09:50.564 { 00:09:50.564 "name": "pt1", 00:09:50.564 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:50.564 "is_configured": true, 00:09:50.564 "data_offset": 2048, 00:09:50.564 "data_size": 63488 00:09:50.564 }, 00:09:50.564 { 00:09:50.564 "name": null, 00:09:50.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.564 "is_configured": false, 00:09:50.564 "data_offset": 2048, 00:09:50.564 "data_size": 63488 00:09:50.564 }, 00:09:50.564 { 00:09:50.564 "name": null, 00:09:50.564 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:50.564 
"is_configured": false, 00:09:50.564 "data_offset": 2048, 00:09:50.564 "data_size": 63488 00:09:50.564 }, 00:09:50.564 { 00:09:50.564 "name": null, 00:09:50.564 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:50.564 "is_configured": false, 00:09:50.564 "data_offset": 2048, 00:09:50.564 "data_size": 63488 00:09:50.564 } 00:09:50.564 ] 00:09:50.564 }' 00:09:50.564 09:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.564 09:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.821 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:50.821 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:50.821 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.821 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.821 [2024-10-30 09:44:29.214192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:50.821 [2024-10-30 09:44:29.214248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.821 [2024-10-30 09:44:29.214262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:09:50.821 [2024-10-30 09:44:29.214270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.821 [2024-10-30 09:44:29.214603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.821 [2024-10-30 09:44:29.214621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:50.821 [2024-10-30 09:44:29.214679] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:50.821 [2024-10-30 09:44:29.214699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:09:50.821 pt2 00:09:50.821 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.821 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:50.821 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.821 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.821 [2024-10-30 09:44:29.222208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:50.821 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.821 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:09:50.821 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.821 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.821 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.821 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.821 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:50.821 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.821 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.821 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.821 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.821 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.821 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.821 09:44:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.821 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.821 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.821 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.821 "name": "raid_bdev1", 00:09:50.821 "uuid": "192b01ab-a172-4230-a02e-46d17117fc96", 00:09:50.822 "strip_size_kb": 0, 00:09:50.822 "state": "configuring", 00:09:50.822 "raid_level": "raid1", 00:09:50.822 "superblock": true, 00:09:50.822 "num_base_bdevs": 4, 00:09:50.822 "num_base_bdevs_discovered": 1, 00:09:50.822 "num_base_bdevs_operational": 4, 00:09:50.822 "base_bdevs_list": [ 00:09:50.822 { 00:09:50.822 "name": "pt1", 00:09:50.822 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:50.822 "is_configured": true, 00:09:50.822 "data_offset": 2048, 00:09:50.822 "data_size": 63488 00:09:50.822 }, 00:09:50.822 { 00:09:50.822 "name": null, 00:09:50.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.822 "is_configured": false, 00:09:50.822 "data_offset": 0, 00:09:50.822 "data_size": 63488 00:09:50.822 }, 00:09:50.822 { 00:09:50.822 "name": null, 00:09:50.822 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:50.822 "is_configured": false, 00:09:50.822 "data_offset": 2048, 00:09:50.822 "data_size": 63488 00:09:50.822 }, 00:09:50.822 { 00:09:50.822 "name": null, 00:09:50.822 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:50.822 "is_configured": false, 00:09:50.822 "data_offset": 2048, 00:09:50.822 "data_size": 63488 00:09:50.822 } 00:09:50.822 ] 00:09:50.822 }' 00:09:50.822 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.822 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.080 [2024-10-30 09:44:29.550244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:51.080 [2024-10-30 09:44:29.550464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.080 [2024-10-30 09:44:29.550490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:51.080 [2024-10-30 09:44:29.550497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.080 [2024-10-30 09:44:29.550837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.080 [2024-10-30 09:44:29.550847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:51.080 [2024-10-30 09:44:29.550908] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:51.080 [2024-10-30 09:44:29.550923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:51.080 pt2 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:51.080 09:44:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.080 [2024-10-30 09:44:29.558244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:51.080 [2024-10-30 09:44:29.558288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.080 [2024-10-30 09:44:29.558302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:51.080 [2024-10-30 09:44:29.558309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.080 [2024-10-30 09:44:29.558632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.080 [2024-10-30 09:44:29.558641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:51.080 [2024-10-30 09:44:29.558696] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:51.080 [2024-10-30 09:44:29.558711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:51.080 pt3 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.080 [2024-10-30 09:44:29.566224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:51.080 [2024-10-30 
09:44:29.566264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.080 [2024-10-30 09:44:29.566279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:09:51.080 [2024-10-30 09:44:29.566285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.080 [2024-10-30 09:44:29.566612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.080 [2024-10-30 09:44:29.566632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:51.080 [2024-10-30 09:44:29.566687] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:51.080 [2024-10-30 09:44:29.566700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:51.080 [2024-10-30 09:44:29.566815] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:51.080 [2024-10-30 09:44:29.566825] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:51.080 [2024-10-30 09:44:29.567024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:51.080 [2024-10-30 09:44:29.567155] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:51.080 [2024-10-30 09:44:29.567164] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:51.080 [2024-10-30 09:44:29.567266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.080 pt4 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.080 "name": "raid_bdev1", 00:09:51.080 "uuid": "192b01ab-a172-4230-a02e-46d17117fc96", 00:09:51.080 "strip_size_kb": 0, 00:09:51.080 "state": "online", 00:09:51.080 "raid_level": "raid1", 00:09:51.080 "superblock": true, 00:09:51.080 "num_base_bdevs": 4, 00:09:51.080 
"num_base_bdevs_discovered": 4, 00:09:51.080 "num_base_bdevs_operational": 4, 00:09:51.080 "base_bdevs_list": [ 00:09:51.080 { 00:09:51.080 "name": "pt1", 00:09:51.080 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:51.080 "is_configured": true, 00:09:51.080 "data_offset": 2048, 00:09:51.080 "data_size": 63488 00:09:51.080 }, 00:09:51.080 { 00:09:51.080 "name": "pt2", 00:09:51.080 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:51.080 "is_configured": true, 00:09:51.080 "data_offset": 2048, 00:09:51.080 "data_size": 63488 00:09:51.080 }, 00:09:51.080 { 00:09:51.080 "name": "pt3", 00:09:51.080 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:51.080 "is_configured": true, 00:09:51.080 "data_offset": 2048, 00:09:51.080 "data_size": 63488 00:09:51.080 }, 00:09:51.080 { 00:09:51.080 "name": "pt4", 00:09:51.080 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:51.080 "is_configured": true, 00:09:51.080 "data_offset": 2048, 00:09:51.080 "data_size": 63488 00:09:51.080 } 00:09:51.080 ] 00:09:51.080 }' 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.080 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.338 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:51.338 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:51.338 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:51.338 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:51.338 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:51.338 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:51.338 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:09:51.338 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:51.338 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.338 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.338 [2024-10-30 09:44:29.886599] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.338 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.338 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:51.338 "name": "raid_bdev1", 00:09:51.338 "aliases": [ 00:09:51.338 "192b01ab-a172-4230-a02e-46d17117fc96" 00:09:51.338 ], 00:09:51.338 "product_name": "Raid Volume", 00:09:51.338 "block_size": 512, 00:09:51.338 "num_blocks": 63488, 00:09:51.338 "uuid": "192b01ab-a172-4230-a02e-46d17117fc96", 00:09:51.338 "assigned_rate_limits": { 00:09:51.338 "rw_ios_per_sec": 0, 00:09:51.338 "rw_mbytes_per_sec": 0, 00:09:51.338 "r_mbytes_per_sec": 0, 00:09:51.338 "w_mbytes_per_sec": 0 00:09:51.338 }, 00:09:51.338 "claimed": false, 00:09:51.338 "zoned": false, 00:09:51.338 "supported_io_types": { 00:09:51.338 "read": true, 00:09:51.338 "write": true, 00:09:51.338 "unmap": false, 00:09:51.338 "flush": false, 00:09:51.338 "reset": true, 00:09:51.338 "nvme_admin": false, 00:09:51.338 "nvme_io": false, 00:09:51.338 "nvme_io_md": false, 00:09:51.338 "write_zeroes": true, 00:09:51.338 "zcopy": false, 00:09:51.338 "get_zone_info": false, 00:09:51.338 "zone_management": false, 00:09:51.338 "zone_append": false, 00:09:51.338 "compare": false, 00:09:51.338 "compare_and_write": false, 00:09:51.338 "abort": false, 00:09:51.338 "seek_hole": false, 00:09:51.338 "seek_data": false, 00:09:51.338 "copy": false, 00:09:51.338 "nvme_iov_md": false 00:09:51.338 }, 00:09:51.338 "memory_domains": [ 00:09:51.338 { 00:09:51.338 "dma_device_id": "system", 00:09:51.338 
"dma_device_type": 1 00:09:51.338 }, 00:09:51.338 { 00:09:51.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.338 "dma_device_type": 2 00:09:51.338 }, 00:09:51.338 { 00:09:51.338 "dma_device_id": "system", 00:09:51.338 "dma_device_type": 1 00:09:51.338 }, 00:09:51.338 { 00:09:51.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.338 "dma_device_type": 2 00:09:51.338 }, 00:09:51.338 { 00:09:51.338 "dma_device_id": "system", 00:09:51.338 "dma_device_type": 1 00:09:51.338 }, 00:09:51.338 { 00:09:51.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.338 "dma_device_type": 2 00:09:51.338 }, 00:09:51.338 { 00:09:51.338 "dma_device_id": "system", 00:09:51.338 "dma_device_type": 1 00:09:51.338 }, 00:09:51.338 { 00:09:51.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.338 "dma_device_type": 2 00:09:51.338 } 00:09:51.338 ], 00:09:51.338 "driver_specific": { 00:09:51.338 "raid": { 00:09:51.338 "uuid": "192b01ab-a172-4230-a02e-46d17117fc96", 00:09:51.338 "strip_size_kb": 0, 00:09:51.338 "state": "online", 00:09:51.338 "raid_level": "raid1", 00:09:51.338 "superblock": true, 00:09:51.338 "num_base_bdevs": 4, 00:09:51.338 "num_base_bdevs_discovered": 4, 00:09:51.338 "num_base_bdevs_operational": 4, 00:09:51.338 "base_bdevs_list": [ 00:09:51.338 { 00:09:51.338 "name": "pt1", 00:09:51.338 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:51.338 "is_configured": true, 00:09:51.338 "data_offset": 2048, 00:09:51.338 "data_size": 63488 00:09:51.338 }, 00:09:51.338 { 00:09:51.338 "name": "pt2", 00:09:51.338 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:51.338 "is_configured": true, 00:09:51.338 "data_offset": 2048, 00:09:51.338 "data_size": 63488 00:09:51.338 }, 00:09:51.338 { 00:09:51.338 "name": "pt3", 00:09:51.338 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:51.338 "is_configured": true, 00:09:51.338 "data_offset": 2048, 00:09:51.338 "data_size": 63488 00:09:51.338 }, 00:09:51.338 { 00:09:51.338 "name": "pt4", 00:09:51.338 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:09:51.338 "is_configured": true, 00:09:51.338 "data_offset": 2048, 00:09:51.338 "data_size": 63488 00:09:51.338 } 00:09:51.338 ] 00:09:51.338 } 00:09:51.338 } 00:09:51.338 }' 00:09:51.338 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:51.338 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:51.338 pt2 00:09:51.338 pt3 00:09:51.338 pt4' 00:09:51.338 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.595 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:51.595 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.595 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:51.595 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.595 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.595 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.595 09:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.595 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.595 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.595 09:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.595 [2024-10-30 09:44:30.102607] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 192b01ab-a172-4230-a02e-46d17117fc96 '!=' 192b01ab-a172-4230-a02e-46d17117fc96 ']' 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.595 [2024-10-30 09:44:30.130373] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:51.595 09:44:30 
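After `bdev_passthru_delete pt1` removes one base bdev, the trace re-runs `verify_raid_bdev_state` expecting `online raid1 0 3`: the array stays online but with 3 of 4 base bdevs. The helper pipes `rpc_cmd bdev_raid_get_bdevs all` through jq and compares fields; the sketch below checks a captured snippet of that degraded-state output, using grep in place of jq purely so it has no dependencies (the snippet and the grep-based checks are illustrative, not the script's actual code):

```shell
#!/bin/bash
# Sketch of the checks verify_raid_bdev_state performs on the JSON that
# bdev_raid_get_bdevs returns after pt1 is deleted. A captured fragment of
# the degraded-state output stands in for the live RPC response.
raid_bdev_info='"state": "online", "raid_level": "raid1",
"num_base_bdevs_discovered": 3, "num_base_bdevs_operational": 3,'

expected_state=online
raid_level=raid1
num_base_bdevs_operational=3

# Each check mirrors one field comparison in the helper.
echo "$raid_bdev_info" | grep -q "\"state\": \"$expected_state\"" || exit 1
echo "$raid_bdev_info" | grep -q "\"raid_level\": \"$raid_level\"" || exit 1
echo "$raid_bdev_info" | grep -q "\"num_base_bdevs_operational\": $num_base_bdevs_operational" || exit 1
echo "raid_bdev1 state verified"
```

Note how the deleted pt1 shows up in the subsequent `base_bdevs_list` as a `null` name with `is_configured: false` and `data_offset: 0`, while pt2..pt4 remain configured.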
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.595 "name": "raid_bdev1", 00:09:51.595 "uuid": "192b01ab-a172-4230-a02e-46d17117fc96", 00:09:51.595 "strip_size_kb": 0, 00:09:51.595 "state": "online", 
00:09:51.595 "raid_level": "raid1", 00:09:51.595 "superblock": true, 00:09:51.595 "num_base_bdevs": 4, 00:09:51.595 "num_base_bdevs_discovered": 3, 00:09:51.595 "num_base_bdevs_operational": 3, 00:09:51.595 "base_bdevs_list": [ 00:09:51.595 { 00:09:51.595 "name": null, 00:09:51.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.595 "is_configured": false, 00:09:51.595 "data_offset": 0, 00:09:51.595 "data_size": 63488 00:09:51.595 }, 00:09:51.595 { 00:09:51.595 "name": "pt2", 00:09:51.595 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:51.595 "is_configured": true, 00:09:51.595 "data_offset": 2048, 00:09:51.595 "data_size": 63488 00:09:51.595 }, 00:09:51.595 { 00:09:51.595 "name": "pt3", 00:09:51.595 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:51.595 "is_configured": true, 00:09:51.595 "data_offset": 2048, 00:09:51.595 "data_size": 63488 00:09:51.595 }, 00:09:51.595 { 00:09:51.595 "name": "pt4", 00:09:51.595 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:51.595 "is_configured": true, 00:09:51.595 "data_offset": 2048, 00:09:51.595 "data_size": 63488 00:09:51.595 } 00:09:51.595 ] 00:09:51.595 }' 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.595 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.852 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:51.852 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.852 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.852 [2024-10-30 09:44:30.462392] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.852 [2024-10-30 09:44:30.462415] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.852 [2024-10-30 09:44:30.462466] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:09:51.852 [2024-10-30 09:44:30.462529] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.852 [2024-10-30 09:44:30.462537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:51.852 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.852 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:51.852 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.852 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.852 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:52.131 
09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.131 [2024-10-30 09:44:30.522398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:52.131 [2024-10-30 09:44:30.522440] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.131 [2024-10-30 09:44:30.522454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:52.131 [2024-10-30 09:44:30.522461] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.131 [2024-10-30 09:44:30.524256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.131 [2024-10-30 09:44:30.524285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:52.131 [2024-10-30 09:44:30.524342] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:52.131 [2024-10-30 09:44:30.524374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:52.131 pt2 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.131 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.131 "name": "raid_bdev1", 00:09:52.131 "uuid": "192b01ab-a172-4230-a02e-46d17117fc96", 00:09:52.131 "strip_size_kb": 0, 00:09:52.131 "state": "configuring", 00:09:52.131 "raid_level": "raid1", 00:09:52.131 "superblock": true, 00:09:52.131 "num_base_bdevs": 4, 00:09:52.131 "num_base_bdevs_discovered": 1, 00:09:52.131 "num_base_bdevs_operational": 3, 00:09:52.131 "base_bdevs_list": [ 00:09:52.131 { 00:09:52.131 "name": null, 00:09:52.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.131 "is_configured": false, 00:09:52.131 "data_offset": 2048, 00:09:52.131 "data_size": 63488 00:09:52.131 }, 00:09:52.131 { 00:09:52.131 "name": "pt2", 00:09:52.131 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:52.131 "is_configured": true, 00:09:52.131 "data_offset": 2048, 00:09:52.131 "data_size": 63488 00:09:52.131 }, 00:09:52.131 { 00:09:52.131 "name": null, 00:09:52.131 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:52.131 "is_configured": false, 00:09:52.132 "data_offset": 2048, 00:09:52.132 "data_size": 63488 00:09:52.132 }, 00:09:52.132 { 00:09:52.132 "name": null, 00:09:52.132 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:52.132 "is_configured": false, 00:09:52.132 "data_offset": 2048, 00:09:52.132 "data_size": 63488 00:09:52.132 } 00:09:52.132 ] 00:09:52.132 }' 
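This phase of the trace tears everything down (`bdev_raid_delete`, then `bdev_passthru_delete` for pt2..pt4) and starts re-creating base bdevs one at a time; with pt1 permanently gone, raid_bdev1 sits in the `configuring` state as each pt comes back. A sketch of that re-add loop, again with a stub `rpc_cmd` (the stub and comments are illustrative):

```shell
#!/bin/bash
# Sketch of the re-add loop: pt1 stays deleted, so at most 3 of 4 base
# bdevs can ever return. rpc_cmd is an echo stub, not the real RPC helper.
rpc_cmd() { echo "rpc: $*"; }

num_base_bdevs=4
for (( i = 2; i < num_base_bdevs; i++ )); do
    rpc_cmd bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "$(printf '00000000-0000-0000-0000-%012d' "$i")"
    # In the real script, verify_raid_bdev_state raid_bdev1 configuring
    # runs here: the array cannot go online until pt4 is back as well.
done
# Re-creating pt4 completes the usable set of base bdevs; the trace that
# follows shows raid_bdev1 transitioning back to online with 3 of 4.
rpc_cmd bdev_passthru_create -b malloc4 -p pt4 \
    -u 00000000-0000-0000-0000-000000000004
```

The `num_base_bdevs_discovered` count in the verify output ticks up from 1 (pt2 only) to 2 (pt2+pt3) across the loop iterations, matching the JSON blobs in the trace.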
00:09:52.132 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.132 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.389 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:52.389 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:52.389 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:52.389 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.389 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.389 [2024-10-30 09:44:30.842486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:52.389 [2024-10-30 09:44:30.842534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.389 [2024-10-30 09:44:30.842550] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:09:52.389 [2024-10-30 09:44:30.842558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.389 [2024-10-30 09:44:30.842901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.389 [2024-10-30 09:44:30.842918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:52.389 [2024-10-30 09:44:30.842978] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:52.389 [2024-10-30 09:44:30.842994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:52.389 pt3 00:09:52.389 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.389 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:09:52.389 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.389 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.389 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.389 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.389 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.389 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.390 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.390 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.390 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.390 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.390 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.390 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.390 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.390 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.390 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.390 "name": "raid_bdev1", 00:09:52.390 "uuid": "192b01ab-a172-4230-a02e-46d17117fc96", 00:09:52.390 "strip_size_kb": 0, 00:09:52.390 "state": "configuring", 00:09:52.390 "raid_level": "raid1", 00:09:52.390 "superblock": true, 00:09:52.390 "num_base_bdevs": 4, 00:09:52.390 "num_base_bdevs_discovered": 2, 00:09:52.390 "num_base_bdevs_operational": 3, 00:09:52.390 
"base_bdevs_list": [ 00:09:52.390 { 00:09:52.390 "name": null, 00:09:52.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.390 "is_configured": false, 00:09:52.390 "data_offset": 2048, 00:09:52.390 "data_size": 63488 00:09:52.390 }, 00:09:52.390 { 00:09:52.390 "name": "pt2", 00:09:52.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:52.390 "is_configured": true, 00:09:52.390 "data_offset": 2048, 00:09:52.390 "data_size": 63488 00:09:52.390 }, 00:09:52.390 { 00:09:52.390 "name": "pt3", 00:09:52.390 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:52.390 "is_configured": true, 00:09:52.390 "data_offset": 2048, 00:09:52.390 "data_size": 63488 00:09:52.390 }, 00:09:52.390 { 00:09:52.390 "name": null, 00:09:52.390 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:52.390 "is_configured": false, 00:09:52.390 "data_offset": 2048, 00:09:52.390 "data_size": 63488 00:09:52.390 } 00:09:52.390 ] 00:09:52.390 }' 00:09:52.390 09:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.390 09:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.648 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:52.648 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:52.648 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:09:52.648 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:52.648 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.648 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.648 [2024-10-30 09:44:31.154551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:52.648 [2024-10-30 09:44:31.154604] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.648 [2024-10-30 09:44:31.154621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:09:52.648 [2024-10-30 09:44:31.154628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.648 [2024-10-30 09:44:31.154961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.648 [2024-10-30 09:44:31.154972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:52.648 [2024-10-30 09:44:31.155030] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:52.648 [2024-10-30 09:44:31.155049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:52.648 [2024-10-30 09:44:31.155157] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:52.648 [2024-10-30 09:44:31.155164] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:52.648 [2024-10-30 09:44:31.155355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:52.648 [2024-10-30 09:44:31.155464] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:52.648 [2024-10-30 09:44:31.155472] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:52.648 [2024-10-30 09:44:31.155569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.648 pt4 00:09:52.648 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.648 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:52.648 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.648 09:44:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.648 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.648 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.648 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.648 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.648 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.648 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.648 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.648 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.648 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.648 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.648 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.648 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.648 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.648 "name": "raid_bdev1", 00:09:52.648 "uuid": "192b01ab-a172-4230-a02e-46d17117fc96", 00:09:52.648 "strip_size_kb": 0, 00:09:52.648 "state": "online", 00:09:52.648 "raid_level": "raid1", 00:09:52.648 "superblock": true, 00:09:52.648 "num_base_bdevs": 4, 00:09:52.648 "num_base_bdevs_discovered": 3, 00:09:52.648 "num_base_bdevs_operational": 3, 00:09:52.648 "base_bdevs_list": [ 00:09:52.648 { 00:09:52.648 "name": null, 00:09:52.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.648 "is_configured": false, 00:09:52.648 
"data_offset": 2048, 00:09:52.648 "data_size": 63488 00:09:52.648 }, 00:09:52.648 { 00:09:52.648 "name": "pt2", 00:09:52.648 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:52.648 "is_configured": true, 00:09:52.648 "data_offset": 2048, 00:09:52.648 "data_size": 63488 00:09:52.648 }, 00:09:52.648 { 00:09:52.648 "name": "pt3", 00:09:52.648 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:52.648 "is_configured": true, 00:09:52.648 "data_offset": 2048, 00:09:52.648 "data_size": 63488 00:09:52.648 }, 00:09:52.648 { 00:09:52.648 "name": "pt4", 00:09:52.648 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:52.648 "is_configured": true, 00:09:52.648 "data_offset": 2048, 00:09:52.648 "data_size": 63488 00:09:52.648 } 00:09:52.648 ] 00:09:52.648 }' 00:09:52.648 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.648 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.907 [2024-10-30 09:44:31.458562] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:52.907 [2024-10-30 09:44:31.458581] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:52.907 [2024-10-30 09:44:31.458632] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.907 [2024-10-30 09:44:31.458689] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.907 [2024-10-30 09:44:31.458697] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:52.907 09:44:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.907 [2024-10-30 09:44:31.506563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:52.907 [2024-10-30 09:44:31.506607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:09:52.907 [2024-10-30 09:44:31.506618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:09:52.907 [2024-10-30 09:44:31.506626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.907 [2024-10-30 09:44:31.508408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.907 [2024-10-30 09:44:31.508439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:52.907 [2024-10-30 09:44:31.508497] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:52.907 [2024-10-30 09:44:31.508531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:52.907 [2024-10-30 09:44:31.508620] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:52.907 [2024-10-30 09:44:31.508630] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:52.907 [2024-10-30 09:44:31.508642] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:09:52.907 [2024-10-30 09:44:31.508685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:52.907 [2024-10-30 09:44:31.508764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:52.907 pt1 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.907 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.165 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.165 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.165 "name": "raid_bdev1", 00:09:53.165 "uuid": "192b01ab-a172-4230-a02e-46d17117fc96", 00:09:53.165 "strip_size_kb": 0, 00:09:53.165 "state": "configuring", 00:09:53.165 "raid_level": "raid1", 00:09:53.165 "superblock": true, 00:09:53.165 "num_base_bdevs": 4, 00:09:53.165 "num_base_bdevs_discovered": 2, 00:09:53.165 "num_base_bdevs_operational": 3, 00:09:53.165 "base_bdevs_list": [ 00:09:53.165 { 00:09:53.165 "name": null, 00:09:53.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.165 "is_configured": false, 00:09:53.165 "data_offset": 2048, 00:09:53.165 
"data_size": 63488 00:09:53.165 }, 00:09:53.165 { 00:09:53.165 "name": "pt2", 00:09:53.165 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:53.165 "is_configured": true, 00:09:53.165 "data_offset": 2048, 00:09:53.165 "data_size": 63488 00:09:53.165 }, 00:09:53.165 { 00:09:53.165 "name": "pt3", 00:09:53.165 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:53.165 "is_configured": true, 00:09:53.165 "data_offset": 2048, 00:09:53.165 "data_size": 63488 00:09:53.165 }, 00:09:53.165 { 00:09:53.165 "name": null, 00:09:53.165 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:53.165 "is_configured": false, 00:09:53.165 "data_offset": 2048, 00:09:53.165 "data_size": 63488 00:09:53.165 } 00:09:53.165 ] 00:09:53.165 }' 00:09:53.165 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.165 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.423 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:53.423 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.423 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.423 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:53.423 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.423 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:53.423 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:53.423 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.423 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.423 [2024-10-30 
09:44:31.838663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:53.423 [2024-10-30 09:44:31.838711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.423 [2024-10-30 09:44:31.838729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:09:53.423 [2024-10-30 09:44:31.838736] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.423 [2024-10-30 09:44:31.839079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.423 [2024-10-30 09:44:31.839091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:53.423 [2024-10-30 09:44:31.839151] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:53.423 [2024-10-30 09:44:31.839171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:53.423 [2024-10-30 09:44:31.839270] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:53.423 [2024-10-30 09:44:31.839276] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:53.423 [2024-10-30 09:44:31.839470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:09:53.423 [2024-10-30 09:44:31.839575] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:53.423 [2024-10-30 09:44:31.839583] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:53.423 [2024-10-30 09:44:31.839683] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.423 pt4 00:09:53.423 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.423 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:53.423 09:44:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.423 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.423 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.423 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.423 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.423 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.423 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.423 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.423 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.423 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.423 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.423 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.423 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.423 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.423 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.423 "name": "raid_bdev1", 00:09:53.423 "uuid": "192b01ab-a172-4230-a02e-46d17117fc96", 00:09:53.423 "strip_size_kb": 0, 00:09:53.423 "state": "online", 00:09:53.423 "raid_level": "raid1", 00:09:53.423 "superblock": true, 00:09:53.423 "num_base_bdevs": 4, 00:09:53.423 "num_base_bdevs_discovered": 3, 00:09:53.423 "num_base_bdevs_operational": 3, 00:09:53.423 "base_bdevs_list": [ 00:09:53.423 { 
00:09:53.423 "name": null, 00:09:53.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.424 "is_configured": false, 00:09:53.424 "data_offset": 2048, 00:09:53.424 "data_size": 63488 00:09:53.424 }, 00:09:53.424 { 00:09:53.424 "name": "pt2", 00:09:53.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:53.424 "is_configured": true, 00:09:53.424 "data_offset": 2048, 00:09:53.424 "data_size": 63488 00:09:53.424 }, 00:09:53.424 { 00:09:53.424 "name": "pt3", 00:09:53.424 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:53.424 "is_configured": true, 00:09:53.424 "data_offset": 2048, 00:09:53.424 "data_size": 63488 00:09:53.424 }, 00:09:53.424 { 00:09:53.424 "name": "pt4", 00:09:53.424 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:53.424 "is_configured": true, 00:09:53.424 "data_offset": 2048, 00:09:53.424 "data_size": 63488 00:09:53.424 } 00:09:53.424 ] 00:09:53.424 }' 00:09:53.424 09:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.424 09:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.681 09:44:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:53.681 09:44:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:53.681 09:44:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.681 09:44:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.681 09:44:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.681 09:44:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:53.681 09:44:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:53.681 09:44:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.681 
09:44:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.681 09:44:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:53.681 [2024-10-30 09:44:32.190977] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.681 09:44:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.681 09:44:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 192b01ab-a172-4230-a02e-46d17117fc96 '!=' 192b01ab-a172-4230-a02e-46d17117fc96 ']' 00:09:53.681 09:44:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72619 00:09:53.681 09:44:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 72619 ']' 00:09:53.681 09:44:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 72619 00:09:53.681 09:44:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:53.681 09:44:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:53.681 09:44:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72619 00:09:53.681 killing process with pid 72619 00:09:53.681 09:44:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:53.681 09:44:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:53.681 09:44:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72619' 00:09:53.681 09:44:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 72619 00:09:53.681 [2024-10-30 09:44:32.240627] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:53.681 [2024-10-30 09:44:32.240694] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.681 09:44:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 72619 00:09:53.681 [2024-10-30 09:44:32.240753] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:53.681 [2024-10-30 09:44:32.240763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:53.939 [2024-10-30 09:44:32.434519] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:54.504 09:44:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:54.504 00:09:54.504 real 0m5.906s 00:09:54.504 user 0m9.431s 00:09:54.504 sys 0m0.990s 00:09:54.504 ************************************ 00:09:54.504 END TEST raid_superblock_test 00:09:54.504 ************************************ 00:09:54.504 09:44:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:54.504 09:44:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.504 09:44:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:09:54.504 09:44:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:54.504 09:44:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:54.504 09:44:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:54.504 ************************************ 00:09:54.504 START TEST raid_read_error_test 00:09:54.504 ************************************ 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 read 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:54.504 
09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:54.504 09:44:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5DFhzev8mV 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73084 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73084 00:09:54.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 73084 ']' 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:54.504 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.504 [2024-10-30 09:44:33.106427] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:09:54.504 [2024-10-30 09:44:33.106543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73084 ] 00:09:54.762 [2024-10-30 09:44:33.262225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.763 [2024-10-30 09:44:33.342751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.021 [2024-10-30 09:44:33.451318] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.021 [2024-10-30 09:44:33.451356] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.586 BaseBdev1_malloc 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.586 true 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.586 [2024-10-30 09:44:33.938930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:55.586 [2024-10-30 09:44:33.938978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.586 [2024-10-30 09:44:33.938994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:55.586 [2024-10-30 09:44:33.939003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.586 [2024-10-30 09:44:33.940790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.586 [2024-10-30 09:44:33.940824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:55.586 BaseBdev1 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.586 BaseBdev2_malloc 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.586 true 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.586 [2024-10-30 09:44:33.978281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:55.586 [2024-10-30 09:44:33.978320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.586 [2024-10-30 09:44:33.978332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:55.586 [2024-10-30 09:44:33.978340] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.586 [2024-10-30 09:44:33.980101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.586 [2024-10-30 09:44:33.980222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:55.586 BaseBdev2 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.586 09:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.586 BaseBdev3_malloc 00:09:55.586 09:44:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.586 09:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:55.586 09:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.586 09:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.586 true 00:09:55.586 09:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.586 09:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:55.586 09:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.586 09:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.586 [2024-10-30 09:44:34.029440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:55.586 [2024-10-30 09:44:34.029575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.586 [2024-10-30 09:44:34.029595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:55.586 [2024-10-30 09:44:34.029603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.586 [2024-10-30 09:44:34.031405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.586 [2024-10-30 09:44:34.031434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:55.586 BaseBdev3 00:09:55.586 09:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.586 09:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.586 09:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:09:55.586 09:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.586 09:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.586 BaseBdev4_malloc 00:09:55.586 09:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.586 09:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:55.586 09:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.586 09:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.586 true 00:09:55.586 09:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.586 09:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:55.586 09:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.586 09:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.586 [2024-10-30 09:44:34.068646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:55.586 [2024-10-30 09:44:34.068764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.586 [2024-10-30 09:44:34.068782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:55.586 [2024-10-30 09:44:34.068792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.587 [2024-10-30 09:44:34.070539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.587 [2024-10-30 09:44:34.070567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:55.587 BaseBdev4 00:09:55.587 09:44:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.587 09:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:55.587 09:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.587 09:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.587 [2024-10-30 09:44:34.076701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.587 [2024-10-30 09:44:34.078253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:55.587 [2024-10-30 09:44:34.078314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:55.587 [2024-10-30 09:44:34.078369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:55.587 [2024-10-30 09:44:34.078553] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:09:55.587 [2024-10-30 09:44:34.078564] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:55.587 [2024-10-30 09:44:34.078754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:55.587 [2024-10-30 09:44:34.078873] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:09:55.587 [2024-10-30 09:44:34.078880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:09:55.587 [2024-10-30 09:44:34.078993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.587 09:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.587 09:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:09:55.587 09:44:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.587 09:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.587 09:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.587 09:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.587 09:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.587 09:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.587 09:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.587 09:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.587 09:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.587 09:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.587 09:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.587 09:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.587 09:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.587 09:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.587 09:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.587 "name": "raid_bdev1", 00:09:55.587 "uuid": "03d7cceb-ea6d-4342-b923-d5c389236fde", 00:09:55.587 "strip_size_kb": 0, 00:09:55.587 "state": "online", 00:09:55.587 "raid_level": "raid1", 00:09:55.587 "superblock": true, 00:09:55.587 "num_base_bdevs": 4, 00:09:55.587 "num_base_bdevs_discovered": 4, 00:09:55.587 "num_base_bdevs_operational": 4, 00:09:55.587 "base_bdevs_list": [ 00:09:55.587 { 
00:09:55.587 "name": "BaseBdev1", 00:09:55.587 "uuid": "10d89149-1da6-522b-a4d9-a54e0a11481f", 00:09:55.587 "is_configured": true, 00:09:55.587 "data_offset": 2048, 00:09:55.587 "data_size": 63488 00:09:55.587 }, 00:09:55.587 { 00:09:55.587 "name": "BaseBdev2", 00:09:55.587 "uuid": "c5c55c4c-e517-51d9-9051-fda39e2ac8cd", 00:09:55.587 "is_configured": true, 00:09:55.587 "data_offset": 2048, 00:09:55.587 "data_size": 63488 00:09:55.587 }, 00:09:55.587 { 00:09:55.587 "name": "BaseBdev3", 00:09:55.587 "uuid": "edf63822-83a9-5d0e-b8bc-d3ed6c409039", 00:09:55.587 "is_configured": true, 00:09:55.587 "data_offset": 2048, 00:09:55.587 "data_size": 63488 00:09:55.587 }, 00:09:55.587 { 00:09:55.587 "name": "BaseBdev4", 00:09:55.587 "uuid": "afcc493b-5173-5973-a9f8-b8698c0ca3e7", 00:09:55.587 "is_configured": true, 00:09:55.587 "data_offset": 2048, 00:09:55.587 "data_size": 63488 00:09:55.587 } 00:09:55.587 ] 00:09:55.587 }' 00:09:55.587 09:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.587 09:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.869 09:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:55.869 09:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:55.869 [2024-10-30 09:44:34.473599] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:56.805 09:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:56.805 09:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.805 09:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.805 09:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.805 09:44:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:56.805 09:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:56.805 09:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:56.805 09:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:56.805 09:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:09:56.805 09:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:56.805 09:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.805 09:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.805 09:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.805 09:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.805 09:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.805 09:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.805 09:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.805 09:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.805 09:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.805 09:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.805 09:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.805 09:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.805 09:44:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.063 09:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.063 "name": "raid_bdev1", 00:09:57.063 "uuid": "03d7cceb-ea6d-4342-b923-d5c389236fde", 00:09:57.063 "strip_size_kb": 0, 00:09:57.063 "state": "online", 00:09:57.063 "raid_level": "raid1", 00:09:57.063 "superblock": true, 00:09:57.063 "num_base_bdevs": 4, 00:09:57.063 "num_base_bdevs_discovered": 4, 00:09:57.063 "num_base_bdevs_operational": 4, 00:09:57.063 "base_bdevs_list": [ 00:09:57.063 { 00:09:57.063 "name": "BaseBdev1", 00:09:57.063 "uuid": "10d89149-1da6-522b-a4d9-a54e0a11481f", 00:09:57.063 "is_configured": true, 00:09:57.063 "data_offset": 2048, 00:09:57.064 "data_size": 63488 00:09:57.064 }, 00:09:57.064 { 00:09:57.064 "name": "BaseBdev2", 00:09:57.064 "uuid": "c5c55c4c-e517-51d9-9051-fda39e2ac8cd", 00:09:57.064 "is_configured": true, 00:09:57.064 "data_offset": 2048, 00:09:57.064 "data_size": 63488 00:09:57.064 }, 00:09:57.064 { 00:09:57.064 "name": "BaseBdev3", 00:09:57.064 "uuid": "edf63822-83a9-5d0e-b8bc-d3ed6c409039", 00:09:57.064 "is_configured": true, 00:09:57.064 "data_offset": 2048, 00:09:57.064 "data_size": 63488 00:09:57.064 }, 00:09:57.064 { 00:09:57.064 "name": "BaseBdev4", 00:09:57.064 "uuid": "afcc493b-5173-5973-a9f8-b8698c0ca3e7", 00:09:57.064 "is_configured": true, 00:09:57.064 "data_offset": 2048, 00:09:57.064 "data_size": 63488 00:09:57.064 } 00:09:57.064 ] 00:09:57.064 }' 00:09:57.064 09:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.064 09:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.321 09:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:57.321 09:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.321 09:44:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:57.321 [2024-10-30 09:44:35.711509] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:57.321 [2024-10-30 09:44:35.711542] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:57.321 [2024-10-30 09:44:35.714011] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:57.321 [2024-10-30 09:44:35.714073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.321 [2024-10-30 09:44:35.714190] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:57.321 [2024-10-30 09:44:35.714207] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:09:57.321 { 00:09:57.321 "results": [ 00:09:57.321 { 00:09:57.321 "job": "raid_bdev1", 00:09:57.321 "core_mask": "0x1", 00:09:57.321 "workload": "randrw", 00:09:57.321 "percentage": 50, 00:09:57.321 "status": "finished", 00:09:57.321 "queue_depth": 1, 00:09:57.321 "io_size": 131072, 00:09:57.321 "runtime": 1.236403, 00:09:57.321 "iops": 14139.40276754424, 00:09:57.321 "mibps": 1767.42534594303, 00:09:57.321 "io_failed": 0, 00:09:57.321 "io_timeout": 0, 00:09:57.321 "avg_latency_us": 68.20851266797497, 00:09:57.321 "min_latency_us": 23.335384615384616, 00:09:57.321 "max_latency_us": 1443.0523076923077 00:09:57.321 } 00:09:57.321 ], 00:09:57.321 "core_count": 1 00:09:57.321 } 00:09:57.321 09:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.321 09:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73084 00:09:57.321 09:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 73084 ']' 00:09:57.321 09:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 73084 00:09:57.321 09:44:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # uname 00:09:57.321 09:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:57.321 09:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73084 00:09:57.321 killing process with pid 73084 00:09:57.321 09:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:57.321 09:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:57.321 09:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73084' 00:09:57.321 09:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 73084 00:09:57.321 [2024-10-30 09:44:35.739688] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:57.321 09:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 73084 00:09:57.321 [2024-10-30 09:44:35.899808] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:57.886 09:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:57.886 09:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5DFhzev8mV 00:09:57.886 09:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:57.886 09:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:57.886 09:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:57.886 09:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:57.886 09:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:57.886 09:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:57.886 00:09:57.886 real 0m3.469s 00:09:57.886 user 0m4.122s 00:09:57.886 sys 0m0.371s 
00:09:57.886 09:44:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:57.886 ************************************ 00:09:57.886 END TEST raid_read_error_test 00:09:57.886 ************************************ 00:09:57.886 09:44:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.144 09:44:36 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:09:58.144 09:44:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:58.144 09:44:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:58.144 09:44:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:58.144 ************************************ 00:09:58.144 START TEST raid_write_error_test 00:09:58.144 ************************************ 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 write 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.lqTLAL6OfB 00:09:58.144 09:44:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73213 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73213 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 73213 ']' 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:58.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.144 09:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:58.144 [2024-10-30 09:44:36.598898] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:09:58.144 [2024-10-30 09:44:36.598997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73213 ] 00:09:58.144 [2024-10-30 09:44:36.754561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.402 [2024-10-30 09:44:36.853369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.402 [2024-10-30 09:44:36.987374] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.402 [2024-10-30 09:44:36.987419] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.968 BaseBdev1_malloc 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.968 true 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.968 [2024-10-30 09:44:37.502202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:58.968 [2024-10-30 09:44:37.502255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.968 [2024-10-30 09:44:37.502273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:58.968 [2024-10-30 09:44:37.502284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.968 [2024-10-30 09:44:37.504373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.968 [2024-10-30 09:44:37.504411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:58.968 BaseBdev1 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.968 BaseBdev2_malloc 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:58.968 09:44:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.968 true 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.968 [2024-10-30 09:44:37.545781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:58.968 [2024-10-30 09:44:37.545825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.968 [2024-10-30 09:44:37.545839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:58.968 [2024-10-30 09:44:37.545849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.968 [2024-10-30 09:44:37.547903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.968 [2024-10-30 09:44:37.547938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:58.968 BaseBdev2 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.968 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:59.226 BaseBdev3_malloc 00:09:59.226 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.226 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:59.226 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.226 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.226 true 00:09:59.226 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.226 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:59.226 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.226 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.226 [2024-10-30 09:44:37.602890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:59.226 [2024-10-30 09:44:37.602939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.226 [2024-10-30 09:44:37.602955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:59.227 [2024-10-30 09:44:37.602965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.227 [2024-10-30 09:44:37.605070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.227 [2024-10-30 09:44:37.605104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:59.227 BaseBdev3 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.227 BaseBdev4_malloc 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.227 true 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.227 [2024-10-30 09:44:37.646647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:59.227 [2024-10-30 09:44:37.646697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.227 [2024-10-30 09:44:37.646713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:59.227 [2024-10-30 09:44:37.646724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.227 [2024-10-30 09:44:37.648766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.227 [2024-10-30 09:44:37.648803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:59.227 BaseBdev4 
00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.227 [2024-10-30 09:44:37.654716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.227 [2024-10-30 09:44:37.656523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:59.227 [2024-10-30 09:44:37.656600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:59.227 [2024-10-30 09:44:37.656673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:59.227 [2024-10-30 09:44:37.656907] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:09:59.227 [2024-10-30 09:44:37.656925] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:59.227 [2024-10-30 09:44:37.657176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:59.227 [2024-10-30 09:44:37.657331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:09:59.227 [2024-10-30 09:44:37.657346] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:09:59.227 [2024-10-30 09:44:37.657481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.227 "name": "raid_bdev1", 00:09:59.227 "uuid": "ee2f2043-21b0-4b1c-a162-458158f068ce", 00:09:59.227 "strip_size_kb": 0, 00:09:59.227 "state": "online", 00:09:59.227 "raid_level": "raid1", 00:09:59.227 "superblock": true, 00:09:59.227 "num_base_bdevs": 4, 00:09:59.227 "num_base_bdevs_discovered": 4, 00:09:59.227 
"num_base_bdevs_operational": 4, 00:09:59.227 "base_bdevs_list": [ 00:09:59.227 { 00:09:59.227 "name": "BaseBdev1", 00:09:59.227 "uuid": "c102a286-5ed7-574d-8d0b-127f3fe84b38", 00:09:59.227 "is_configured": true, 00:09:59.227 "data_offset": 2048, 00:09:59.227 "data_size": 63488 00:09:59.227 }, 00:09:59.227 { 00:09:59.227 "name": "BaseBdev2", 00:09:59.227 "uuid": "28be3000-42b8-549d-b36e-518aea4e1a77", 00:09:59.227 "is_configured": true, 00:09:59.227 "data_offset": 2048, 00:09:59.227 "data_size": 63488 00:09:59.227 }, 00:09:59.227 { 00:09:59.227 "name": "BaseBdev3", 00:09:59.227 "uuid": "5a0b82fb-b5b8-5516-be46-860482b71ec3", 00:09:59.227 "is_configured": true, 00:09:59.227 "data_offset": 2048, 00:09:59.227 "data_size": 63488 00:09:59.227 }, 00:09:59.227 { 00:09:59.227 "name": "BaseBdev4", 00:09:59.227 "uuid": "fee4678c-3283-5c1c-824b-25cf2006df4a", 00:09:59.227 "is_configured": true, 00:09:59.227 "data_offset": 2048, 00:09:59.227 "data_size": 63488 00:09:59.227 } 00:09:59.227 ] 00:09:59.227 }' 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.227 09:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.485 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:59.485 09:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:59.485 [2024-10-30 09:44:38.035718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:00.431 09:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:00.431 09:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.431 09:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.431 [2024-10-30 09:44:38.966299] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:00.431 [2024-10-30 09:44:38.966356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:00.431 [2024-10-30 09:44:38.966582] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:10:00.431 09:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.431 09:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:00.431 09:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:00.431 09:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:00.431 09:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:00.431 09:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:00.431 09:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:00.431 09:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.431 09:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.431 09:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.431 09:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.431 09:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.431 09:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.431 09:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.431 09:44:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.431 09:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.431 09:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.431 09:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.431 09:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.431 09:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.431 09:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.431 "name": "raid_bdev1", 00:10:00.431 "uuid": "ee2f2043-21b0-4b1c-a162-458158f068ce", 00:10:00.431 "strip_size_kb": 0, 00:10:00.431 "state": "online", 00:10:00.431 "raid_level": "raid1", 00:10:00.431 "superblock": true, 00:10:00.431 "num_base_bdevs": 4, 00:10:00.431 "num_base_bdevs_discovered": 3, 00:10:00.431 "num_base_bdevs_operational": 3, 00:10:00.431 "base_bdevs_list": [ 00:10:00.431 { 00:10:00.431 "name": null, 00:10:00.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.431 "is_configured": false, 00:10:00.431 "data_offset": 0, 00:10:00.431 "data_size": 63488 00:10:00.431 }, 00:10:00.431 { 00:10:00.431 "name": "BaseBdev2", 00:10:00.431 "uuid": "28be3000-42b8-549d-b36e-518aea4e1a77", 00:10:00.431 "is_configured": true, 00:10:00.431 "data_offset": 2048, 00:10:00.431 "data_size": 63488 00:10:00.431 }, 00:10:00.431 { 00:10:00.431 "name": "BaseBdev3", 00:10:00.431 "uuid": "5a0b82fb-b5b8-5516-be46-860482b71ec3", 00:10:00.431 "is_configured": true, 00:10:00.431 "data_offset": 2048, 00:10:00.431 "data_size": 63488 00:10:00.431 }, 00:10:00.431 { 00:10:00.431 "name": "BaseBdev4", 00:10:00.431 "uuid": "fee4678c-3283-5c1c-824b-25cf2006df4a", 00:10:00.431 "is_configured": true, 00:10:00.431 "data_offset": 2048, 00:10:00.431 "data_size": 63488 00:10:00.431 } 00:10:00.431 ] 
00:10:00.431 }' 00:10:00.431 09:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.431 09:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.689 09:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:00.689 09:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.689 09:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.689 [2024-10-30 09:44:39.265908] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:00.689 [2024-10-30 09:44:39.265940] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:00.689 [2024-10-30 09:44:39.268914] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:00.689 [2024-10-30 09:44:39.268960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.689 [2024-10-30 09:44:39.269082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:00.689 [2024-10-30 09:44:39.269094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:00.690 { 00:10:00.690 "results": [ 00:10:00.690 { 00:10:00.690 "job": "raid_bdev1", 00:10:00.690 "core_mask": "0x1", 00:10:00.690 "workload": "randrw", 00:10:00.690 "percentage": 50, 00:10:00.690 "status": "finished", 00:10:00.690 "queue_depth": 1, 00:10:00.690 "io_size": 131072, 00:10:00.690 "runtime": 1.228273, 00:10:00.690 "iops": 12580.265136496528, 00:10:00.690 "mibps": 1572.533142062066, 00:10:00.690 "io_failed": 0, 00:10:00.690 "io_timeout": 0, 00:10:00.690 "avg_latency_us": 76.32464843983354, 00:10:00.690 "min_latency_us": 29.53846153846154, 00:10:00.690 "max_latency_us": 1726.6215384615384 00:10:00.690 } 00:10:00.690 ], 00:10:00.690 "core_count": 1 
00:10:00.690 } 00:10:00.690 09:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.690 09:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73213 00:10:00.690 09:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 73213 ']' 00:10:00.690 09:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 73213 00:10:00.690 09:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:10:00.690 09:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:00.690 09:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73213 00:10:00.690 09:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:00.690 09:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:00.690 killing process with pid 73213 00:10:00.690 09:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73213' 00:10:00.690 09:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 73213 00:10:00.690 [2024-10-30 09:44:39.296253] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:00.690 09:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 73213 00:10:00.948 [2024-10-30 09:44:39.494672] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:01.882 09:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.lqTLAL6OfB 00:10:01.882 09:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:01.882 09:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:01.882 09:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:10:01.882 09:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:01.882 09:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:01.882 09:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:01.882 09:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:01.882 00:10:01.882 real 0m3.679s 00:10:01.882 user 0m4.346s 00:10:01.882 sys 0m0.379s 00:10:01.882 09:44:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:01.882 09:44:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.882 ************************************ 00:10:01.882 END TEST raid_write_error_test 00:10:01.882 ************************************ 00:10:01.882 09:44:40 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:10:01.882 09:44:40 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:10:01.882 09:44:40 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:10:01.882 09:44:40 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:10:01.882 09:44:40 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:01.882 09:44:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:01.882 ************************************ 00:10:01.882 START TEST raid_rebuild_test 00:10:01.882 ************************************ 00:10:01.882 09:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false false true 00:10:01.882 09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:01.882 09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:10:01.882 09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:10:01.882 
09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:10:01.882 09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=73346 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 73346 00:10:01.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 73346 ']' 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:01.883 09:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.883 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:01.883 Zero copy mechanism will not be used. 00:10:01.883 [2024-10-30 09:44:40.328042] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:10:01.883 [2024-10-30 09:44:40.328181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73346 ] 00:10:01.883 [2024-10-30 09:44:40.486530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.142 [2024-10-30 09:44:40.585847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.142 [2024-10-30 09:44:40.720229] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.142 [2024-10-30 09:44:40.720273] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.712 BaseBdev1_malloc 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.712 [2024-10-30 09:44:41.209205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:02.712 
[2024-10-30 09:44:41.209265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.712 [2024-10-30 09:44:41.209286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:02.712 [2024-10-30 09:44:41.209298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.712 [2024-10-30 09:44:41.211451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.712 [2024-10-30 09:44:41.211587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:02.712 BaseBdev1 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.712 BaseBdev2_malloc 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.712 [2024-10-30 09:44:41.245341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:02.712 [2024-10-30 09:44:41.245391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.712 [2024-10-30 09:44:41.245407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:10:02.712 [2024-10-30 09:44:41.245417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.712 [2024-10-30 09:44:41.247457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.712 [2024-10-30 09:44:41.247491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:02.712 BaseBdev2 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.712 spare_malloc 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.712 spare_delay 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.712 [2024-10-30 09:44:41.313739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:02.712 [2024-10-30 09:44:41.313791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:10:02.712 [2024-10-30 09:44:41.313808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:02.712 [2024-10-30 09:44:41.313818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.712 [2024-10-30 09:44:41.315967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.712 [2024-10-30 09:44:41.316004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:02.712 spare 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.712 [2024-10-30 09:44:41.321788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.712 [2024-10-30 09:44:41.323611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:02.712 [2024-10-30 09:44:41.323696] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:02.712 [2024-10-30 09:44:41.323709] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:02.712 [2024-10-30 09:44:41.323954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:02.712 [2024-10-30 09:44:41.324103] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:02.712 [2024-10-30 09:44:41.324114] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:02.712 [2024-10-30 09:44:41.324247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.712 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.713 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.713 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.713 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.713 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.713 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.713 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.972 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.972 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.972 "name": "raid_bdev1", 00:10:02.972 "uuid": "8eeeb1e0-b84f-405a-b707-3a5e4747b137", 00:10:02.972 "strip_size_kb": 0, 00:10:02.972 "state": "online", 00:10:02.972 
"raid_level": "raid1", 00:10:02.972 "superblock": false, 00:10:02.972 "num_base_bdevs": 2, 00:10:02.972 "num_base_bdevs_discovered": 2, 00:10:02.972 "num_base_bdevs_operational": 2, 00:10:02.972 "base_bdevs_list": [ 00:10:02.972 { 00:10:02.972 "name": "BaseBdev1", 00:10:02.972 "uuid": "cb0a7d51-a382-53a4-bdf6-6e1551e12117", 00:10:02.972 "is_configured": true, 00:10:02.972 "data_offset": 0, 00:10:02.972 "data_size": 65536 00:10:02.972 }, 00:10:02.972 { 00:10:02.972 "name": "BaseBdev2", 00:10:02.972 "uuid": "19a406bd-d01a-5d29-a340-f08a41b0b940", 00:10:02.972 "is_configured": true, 00:10:02.972 "data_offset": 0, 00:10:02.972 "data_size": 65536 00:10:02.972 } 00:10:02.972 ] 00:10:02.972 }' 00:10:02.973 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.973 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.231 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:03.232 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:03.232 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.232 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.232 [2024-10-30 09:44:41.658174] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.232 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.232 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:10:03.232 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.232 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.232 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.232 09:44:41 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:03.232 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.232 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:10:03.232 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:10:03.232 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:10:03.232 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:10:03.232 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:10:03.232 09:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:03.232 09:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:10:03.232 09:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:03.232 09:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:03.232 09:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:03.232 09:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:10:03.232 09:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:03.232 09:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:03.232 09:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:10:03.490 [2024-10-30 09:44:41.901971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:03.490 /dev/nbd0 00:10:03.490 09:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:03.490 09:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:10:03.490 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:10:03.490 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:10:03.490 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:03.490 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:03.490 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:10:03.490 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:10:03.490 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:03.490 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:03.490 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:03.490 1+0 records in 00:10:03.490 1+0 records out 00:10:03.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196084 s, 20.9 MB/s 00:10:03.490 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:03.490 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:10:03.490 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:03.490 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:03.490 09:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:10:03.490 09:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:03.490 09:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:03.490 09:44:41 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:10:03.490 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:10:03.490 09:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:10:08.809 65536+0 records in 00:10:08.809 65536+0 records out 00:10:08.809 33554432 bytes (34 MB, 32 MiB) copied, 4.69376 s, 7.1 MB/s 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:08.809 [2024-10-30 09:44:46.851428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.809 [2024-10-30 09:44:46.876228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.809 "name": "raid_bdev1", 00:10:08.809 "uuid": "8eeeb1e0-b84f-405a-b707-3a5e4747b137", 00:10:08.809 "strip_size_kb": 0, 00:10:08.809 "state": "online", 00:10:08.809 "raid_level": "raid1", 00:10:08.809 "superblock": false, 00:10:08.809 "num_base_bdevs": 2, 00:10:08.809 "num_base_bdevs_discovered": 1, 00:10:08.809 "num_base_bdevs_operational": 1, 00:10:08.809 "base_bdevs_list": [ 00:10:08.809 { 00:10:08.809 "name": null, 00:10:08.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.809 "is_configured": false, 00:10:08.809 "data_offset": 0, 00:10:08.809 "data_size": 65536 00:10:08.809 }, 00:10:08.809 { 00:10:08.809 "name": "BaseBdev2", 00:10:08.809 "uuid": "19a406bd-d01a-5d29-a340-f08a41b0b940", 00:10:08.809 "is_configured": true, 00:10:08.809 "data_offset": 0, 00:10:08.809 "data_size": 65536 00:10:08.809 } 00:10:08.809 ] 00:10:08.809 }' 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.809 09:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.809 09:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:08.809 09:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.809 09:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.809 [2024-10-30 09:44:47.184309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:08.809 [2024-10-30 09:44:47.193578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:10:08.809 09:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.809 09:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:08.809 [2024-10-30 09:44:47.195168] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:09.742 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:09.742 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:09.742 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:09.742 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:09.742 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:09.742 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.742 09:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.742 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.742 09:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.742 09:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.742 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:09.742 "name": "raid_bdev1", 00:10:09.742 "uuid": "8eeeb1e0-b84f-405a-b707-3a5e4747b137", 00:10:09.742 "strip_size_kb": 0, 00:10:09.742 "state": "online", 00:10:09.742 "raid_level": "raid1", 00:10:09.742 "superblock": false, 00:10:09.742 "num_base_bdevs": 2, 00:10:09.742 "num_base_bdevs_discovered": 2, 00:10:09.742 "num_base_bdevs_operational": 2, 00:10:09.742 "process": { 00:10:09.742 "type": "rebuild", 00:10:09.742 "target": "spare", 00:10:09.742 "progress": { 00:10:09.742 
"blocks": 20480, 00:10:09.742 "percent": 31 00:10:09.742 } 00:10:09.742 }, 00:10:09.742 "base_bdevs_list": [ 00:10:09.742 { 00:10:09.742 "name": "spare", 00:10:09.742 "uuid": "753ef713-3097-5648-a106-98477cdfe387", 00:10:09.742 "is_configured": true, 00:10:09.742 "data_offset": 0, 00:10:09.742 "data_size": 65536 00:10:09.742 }, 00:10:09.742 { 00:10:09.742 "name": "BaseBdev2", 00:10:09.742 "uuid": "19a406bd-d01a-5d29-a340-f08a41b0b940", 00:10:09.742 "is_configured": true, 00:10:09.742 "data_offset": 0, 00:10:09.742 "data_size": 65536 00:10:09.742 } 00:10:09.742 ] 00:10:09.742 }' 00:10:09.742 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:09.742 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:09.742 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:09.742 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:09.742 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:09.742 09:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.742 09:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.742 [2024-10-30 09:44:48.305303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:09.999 [2024-10-30 09:44:48.400292] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:10.000 [2024-10-30 09:44:48.400475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.000 [2024-10-30 09:44:48.400490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:10.000 [2024-10-30 09:44:48.400500] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:10.000 09:44:48 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.000 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:10.000 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:10.000 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.000 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.000 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.000 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:10.000 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.000 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.000 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.000 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.000 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.000 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.000 09:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.000 09:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.000 09:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.000 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.000 "name": "raid_bdev1", 00:10:10.000 "uuid": "8eeeb1e0-b84f-405a-b707-3a5e4747b137", 00:10:10.000 "strip_size_kb": 0, 00:10:10.000 "state": "online", 00:10:10.000 "raid_level": "raid1", 00:10:10.000 
"superblock": false, 00:10:10.000 "num_base_bdevs": 2, 00:10:10.000 "num_base_bdevs_discovered": 1, 00:10:10.000 "num_base_bdevs_operational": 1, 00:10:10.000 "base_bdevs_list": [ 00:10:10.000 { 00:10:10.000 "name": null, 00:10:10.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.000 "is_configured": false, 00:10:10.000 "data_offset": 0, 00:10:10.000 "data_size": 65536 00:10:10.000 }, 00:10:10.000 { 00:10:10.000 "name": "BaseBdev2", 00:10:10.000 "uuid": "19a406bd-d01a-5d29-a340-f08a41b0b940", 00:10:10.000 "is_configured": true, 00:10:10.000 "data_offset": 0, 00:10:10.000 "data_size": 65536 00:10:10.000 } 00:10:10.000 ] 00:10:10.000 }' 00:10:10.000 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.000 09:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.257 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:10.257 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:10.257 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:10.257 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:10.257 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:10.257 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.257 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.257 09:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.257 09:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.257 09:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.257 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:10:10.257 "name": "raid_bdev1", 00:10:10.257 "uuid": "8eeeb1e0-b84f-405a-b707-3a5e4747b137", 00:10:10.257 "strip_size_kb": 0, 00:10:10.257 "state": "online", 00:10:10.257 "raid_level": "raid1", 00:10:10.257 "superblock": false, 00:10:10.257 "num_base_bdevs": 2, 00:10:10.257 "num_base_bdevs_discovered": 1, 00:10:10.257 "num_base_bdevs_operational": 1, 00:10:10.257 "base_bdevs_list": [ 00:10:10.257 { 00:10:10.257 "name": null, 00:10:10.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.257 "is_configured": false, 00:10:10.257 "data_offset": 0, 00:10:10.257 "data_size": 65536 00:10:10.257 }, 00:10:10.257 { 00:10:10.257 "name": "BaseBdev2", 00:10:10.257 "uuid": "19a406bd-d01a-5d29-a340-f08a41b0b940", 00:10:10.257 "is_configured": true, 00:10:10.257 "data_offset": 0, 00:10:10.257 "data_size": 65536 00:10:10.257 } 00:10:10.257 ] 00:10:10.257 }' 00:10:10.257 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:10.257 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:10.257 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:10.257 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:10.257 09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:10.257 09:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.257 09:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.257 [2024-10-30 09:44:48.811337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:10.257 [2024-10-30 09:44:48.820482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:10:10.258 09:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.258 
09:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:10:10.258 [2024-10-30 09:44:48.822048] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:11.630 "name": "raid_bdev1", 00:10:11.630 "uuid": "8eeeb1e0-b84f-405a-b707-3a5e4747b137", 00:10:11.630 "strip_size_kb": 0, 00:10:11.630 "state": "online", 00:10:11.630 "raid_level": "raid1", 00:10:11.630 "superblock": false, 00:10:11.630 "num_base_bdevs": 2, 00:10:11.630 "num_base_bdevs_discovered": 2, 00:10:11.630 "num_base_bdevs_operational": 2, 00:10:11.630 "process": { 00:10:11.630 "type": "rebuild", 00:10:11.630 "target": "spare", 00:10:11.630 "progress": { 00:10:11.630 "blocks": 20480, 00:10:11.630 "percent": 31 00:10:11.630 } 00:10:11.630 }, 00:10:11.630 "base_bdevs_list": [ 
00:10:11.630 { 00:10:11.630 "name": "spare", 00:10:11.630 "uuid": "753ef713-3097-5648-a106-98477cdfe387", 00:10:11.630 "is_configured": true, 00:10:11.630 "data_offset": 0, 00:10:11.630 "data_size": 65536 00:10:11.630 }, 00:10:11.630 { 00:10:11.630 "name": "BaseBdev2", 00:10:11.630 "uuid": "19a406bd-d01a-5d29-a340-f08a41b0b940", 00:10:11.630 "is_configured": true, 00:10:11.630 "data_offset": 0, 00:10:11.630 "data_size": 65536 00:10:11.630 } 00:10:11.630 ] 00:10:11.630 }' 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=284 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:11.630 
09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:11.630 "name": "raid_bdev1", 00:10:11.630 "uuid": "8eeeb1e0-b84f-405a-b707-3a5e4747b137", 00:10:11.630 "strip_size_kb": 0, 00:10:11.630 "state": "online", 00:10:11.630 "raid_level": "raid1", 00:10:11.630 "superblock": false, 00:10:11.630 "num_base_bdevs": 2, 00:10:11.630 "num_base_bdevs_discovered": 2, 00:10:11.630 "num_base_bdevs_operational": 2, 00:10:11.630 "process": { 00:10:11.630 "type": "rebuild", 00:10:11.630 "target": "spare", 00:10:11.630 "progress": { 00:10:11.630 "blocks": 22528, 00:10:11.630 "percent": 34 00:10:11.630 } 00:10:11.630 }, 00:10:11.630 "base_bdevs_list": [ 00:10:11.630 { 00:10:11.630 "name": "spare", 00:10:11.630 "uuid": "753ef713-3097-5648-a106-98477cdfe387", 00:10:11.630 "is_configured": true, 00:10:11.630 "data_offset": 0, 00:10:11.630 "data_size": 65536 00:10:11.630 }, 00:10:11.630 { 00:10:11.630 "name": "BaseBdev2", 00:10:11.630 "uuid": "19a406bd-d01a-5d29-a340-f08a41b0b940", 00:10:11.630 "is_configured": true, 00:10:11.630 "data_offset": 0, 00:10:11.630 "data_size": 65536 00:10:11.630 } 00:10:11.630 ] 00:10:11.630 }' 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:10:11.630 09:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:11.630 09:44:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:11.630 09:44:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:12.574 09:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:12.574 09:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:12.574 09:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:12.574 09:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:12.574 09:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:12.574 09:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:12.574 09:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.574 09:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.574 09:44:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.574 09:44:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.574 09:44:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.574 09:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:12.574 "name": "raid_bdev1", 00:10:12.574 "uuid": "8eeeb1e0-b84f-405a-b707-3a5e4747b137", 00:10:12.574 "strip_size_kb": 0, 00:10:12.574 "state": "online", 00:10:12.574 "raid_level": "raid1", 00:10:12.574 "superblock": false, 00:10:12.574 "num_base_bdevs": 2, 00:10:12.574 "num_base_bdevs_discovered": 2, 00:10:12.574 "num_base_bdevs_operational": 2, 00:10:12.574 "process": { 
00:10:12.574 "type": "rebuild", 00:10:12.574 "target": "spare", 00:10:12.574 "progress": { 00:10:12.574 "blocks": 43008, 00:10:12.574 "percent": 65 00:10:12.574 } 00:10:12.574 }, 00:10:12.574 "base_bdevs_list": [ 00:10:12.574 { 00:10:12.574 "name": "spare", 00:10:12.574 "uuid": "753ef713-3097-5648-a106-98477cdfe387", 00:10:12.574 "is_configured": true, 00:10:12.574 "data_offset": 0, 00:10:12.574 "data_size": 65536 00:10:12.574 }, 00:10:12.574 { 00:10:12.574 "name": "BaseBdev2", 00:10:12.574 "uuid": "19a406bd-d01a-5d29-a340-f08a41b0b940", 00:10:12.574 "is_configured": true, 00:10:12.574 "data_offset": 0, 00:10:12.574 "data_size": 65536 00:10:12.574 } 00:10:12.574 ] 00:10:12.574 }' 00:10:12.574 09:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:12.574 09:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:12.574 09:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:12.574 09:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:12.574 09:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:13.507 [2024-10-30 09:44:52.035535] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:10:13.507 [2024-10-30 09:44:52.035600] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:10:13.507 [2024-10-30 09:44:52.035643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.507 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:13.507 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:13.507 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:13.507 09:44:52 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:13.507 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:13.507 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:13.507 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.507 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.507 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.507 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.507 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:13.765 "name": "raid_bdev1", 00:10:13.765 "uuid": "8eeeb1e0-b84f-405a-b707-3a5e4747b137", 00:10:13.765 "strip_size_kb": 0, 00:10:13.765 "state": "online", 00:10:13.765 "raid_level": "raid1", 00:10:13.765 "superblock": false, 00:10:13.765 "num_base_bdevs": 2, 00:10:13.765 "num_base_bdevs_discovered": 2, 00:10:13.765 "num_base_bdevs_operational": 2, 00:10:13.765 "base_bdevs_list": [ 00:10:13.765 { 00:10:13.765 "name": "spare", 00:10:13.765 "uuid": "753ef713-3097-5648-a106-98477cdfe387", 00:10:13.765 "is_configured": true, 00:10:13.765 "data_offset": 0, 00:10:13.765 "data_size": 65536 00:10:13.765 }, 00:10:13.765 { 00:10:13.765 "name": "BaseBdev2", 00:10:13.765 "uuid": "19a406bd-d01a-5d29-a340-f08a41b0b940", 00:10:13.765 "is_configured": true, 00:10:13.765 "data_offset": 0, 00:10:13.765 "data_size": 65536 00:10:13.765 } 00:10:13.765 ] 00:10:13.765 }' 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:10:13.765 09:44:52 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:13.765 "name": "raid_bdev1", 00:10:13.765 "uuid": "8eeeb1e0-b84f-405a-b707-3a5e4747b137", 00:10:13.765 "strip_size_kb": 0, 00:10:13.765 "state": "online", 00:10:13.765 "raid_level": "raid1", 00:10:13.765 "superblock": false, 00:10:13.765 "num_base_bdevs": 2, 00:10:13.765 "num_base_bdevs_discovered": 2, 00:10:13.765 "num_base_bdevs_operational": 2, 00:10:13.765 "base_bdevs_list": [ 00:10:13.765 { 00:10:13.765 "name": "spare", 00:10:13.765 "uuid": "753ef713-3097-5648-a106-98477cdfe387", 00:10:13.765 "is_configured": true, 
00:10:13.765 "data_offset": 0, 00:10:13.765 "data_size": 65536 00:10:13.765 }, 00:10:13.765 { 00:10:13.765 "name": "BaseBdev2", 00:10:13.765 "uuid": "19a406bd-d01a-5d29-a340-f08a41b0b940", 00:10:13.765 "is_configured": true, 00:10:13.765 "data_offset": 0, 00:10:13.765 "data_size": 65536 00:10:13.765 } 00:10:13.765 ] 00:10:13.765 }' 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.765 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.765 "name": "raid_bdev1", 00:10:13.765 "uuid": "8eeeb1e0-b84f-405a-b707-3a5e4747b137", 00:10:13.765 "strip_size_kb": 0, 00:10:13.765 "state": "online", 00:10:13.765 "raid_level": "raid1", 00:10:13.765 "superblock": false, 00:10:13.765 "num_base_bdevs": 2, 00:10:13.765 "num_base_bdevs_discovered": 2, 00:10:13.765 "num_base_bdevs_operational": 2, 00:10:13.765 "base_bdevs_list": [ 00:10:13.765 { 00:10:13.765 "name": "spare", 00:10:13.765 "uuid": "753ef713-3097-5648-a106-98477cdfe387", 00:10:13.765 "is_configured": true, 00:10:13.765 "data_offset": 0, 00:10:13.766 "data_size": 65536 00:10:13.766 }, 00:10:13.766 { 00:10:13.766 "name": "BaseBdev2", 00:10:13.766 "uuid": "19a406bd-d01a-5d29-a340-f08a41b0b940", 00:10:13.766 "is_configured": true, 00:10:13.766 "data_offset": 0, 00:10:13.766 "data_size": 65536 00:10:13.766 } 00:10:13.766 ] 00:10:13.766 }' 00:10:13.766 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.766 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.024 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:14.025 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.025 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.025 [2024-10-30 09:44:52.630254] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:14.025 [2024-10-30 09:44:52.630277] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.025 [2024-10-30 09:44:52.630340] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.025 [2024-10-30 09:44:52.630393] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:14.025 [2024-10-30 09:44:52.630401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:14.025 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.025 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.025 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.025 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.025 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:10:14.025 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.283 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:10:14.283 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:10:14.283 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:10:14.283 09:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:10:14.283 09:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:14.283 09:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:10:14.283 09:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:14.283 09:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:10:14.283 09:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:14.283 09:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:10:14.283 09:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:14.283 09:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:14.283 09:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:10:14.283 /dev/nbd0 00:10:14.283 09:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:14.283 09:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:14.283 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:10:14.283 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:10:14.283 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:14.283 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:14.283 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:10:14.283 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:10:14.283 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:14.283 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:14.283 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:14.283 1+0 records in 00:10:14.283 1+0 records out 00:10:14.283 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376398 s, 10.9 MB/s 00:10:14.283 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:14.541 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:10:14.541 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:14.541 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:14.541 09:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:10:14.541 09:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:14.541 09:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:14.541 09:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:10:14.541 /dev/nbd1 00:10:14.541 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:14.541 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:14.541 09:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:10:14.541 09:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:10:14.541 09:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:14.541 09:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:14.541 09:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:10:14.541 09:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:10:14.541 09:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:14.541 09:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:14.541 09:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:14.541 1+0 records in 00:10:14.541 1+0 records out 00:10:14.541 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223033 s, 18.4 MB/s 00:10:14.541 09:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:14.541 09:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:10:14.541 09:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:14.541 09:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:14.541 09:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:10:14.541 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:14.541 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:14.541 09:44:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:10:14.799 09:44:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:10:14.799 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:14.799 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:14.799 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:14.799 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:10:14.799 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:14.799 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:15.057 09:44:53 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:15.057 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:15.057 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:15.057 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:15.057 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:15.057 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:15.057 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:15.057 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:15.057 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:15.057 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:10:15.315 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:15.315 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:15.315 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:15.315 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:15.315 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:15.315 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:15.315 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:15.315 09:44:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:15.315 09:44:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:10:15.315 09:44:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 73346 00:10:15.315 09:44:53 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 73346 ']' 00:10:15.315 09:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 73346 00:10:15.315 09:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:10:15.315 09:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:15.315 09:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73346 00:10:15.315 killing process with pid 73346 00:10:15.315 Received shutdown signal, test time was about 60.000000 seconds 00:10:15.315 00:10:15.315 Latency(us) 00:10:15.315 [2024-10-30T09:44:53.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:15.315 [2024-10-30T09:44:53.935Z] =================================================================================================================== 00:10:15.315 [2024-10-30T09:44:53.935Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:15.315 09:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:15.315 09:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:15.315 09:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73346' 00:10:15.315 09:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 73346 00:10:15.315 [2024-10-30 09:44:53.720716] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:15.315 09:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 73346 00:10:15.316 [2024-10-30 09:44:53.868969] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:15.882 ************************************ 00:10:15.882 END TEST raid_rebuild_test 00:10:15.882 ************************************ 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@786 -- # return 0 00:10:15.882 00:10:15.882 real 0m14.170s 00:10:15.882 user 0m15.565s 00:10:15.882 sys 0m2.768s 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.882 09:44:54 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:10:15.882 09:44:54 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:10:15.882 09:44:54 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:15.882 09:44:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:15.882 ************************************ 00:10:15.882 START TEST raid_rebuild_test_sb 00:10:15.882 ************************************ 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( 
i <= num_base_bdevs )) 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:15.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=73757 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 73757 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 73757 ']' 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:15.882 09:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.140 [2024-10-30 09:44:54.541777] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:10:16.140 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:16.140 Zero copy mechanism will not be used. 00:10:16.140 [2024-10-30 09:44:54.542020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73757 ] 00:10:16.140 [2024-10-30 09:44:54.695984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.397 [2024-10-30 09:44:54.778456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.397 [2024-10-30 09:44:54.888487] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.397 [2024-10-30 09:44:54.888527] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.964 BaseBdev1_malloc 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.964 [2024-10-30 09:44:55.414181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:16.964 [2024-10-30 09:44:55.414236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.964 [2024-10-30 09:44:55.414253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:16.964 [2024-10-30 09:44:55.414262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.964 [2024-10-30 09:44:55.416008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.964 [2024-10-30 09:44:55.416165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:16.964 BaseBdev1 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.964 BaseBdev2_malloc 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.964 [2024-10-30 09:44:55.445515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:16.964 [2024-10-30 09:44:55.445558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.964 [2024-10-30 09:44:55.445572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:16.964 [2024-10-30 09:44:55.445582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.964 [2024-10-30 09:44:55.447292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.964 [2024-10-30 09:44:55.447415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:16.964 BaseBdev2 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.964 spare_malloc 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.964 spare_delay 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.964 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.964 [2024-10-30 09:44:55.497585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:16.964 [2024-10-30 09:44:55.497630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.965 [2024-10-30 09:44:55.497644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:16.965 [2024-10-30 09:44:55.497653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.965 [2024-10-30 09:44:55.499415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.965 [2024-10-30 09:44:55.499446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:16.965 spare 00:10:16.965 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.965 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:10:16.965 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.965 
09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.965 [2024-10-30 09:44:55.505640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.965 [2024-10-30 09:44:55.507136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.965 [2024-10-30 09:44:55.507265] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:16.965 [2024-10-30 09:44:55.507277] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:16.965 [2024-10-30 09:44:55.507476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:16.965 [2024-10-30 09:44:55.507596] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:16.965 [2024-10-30 09:44:55.507603] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:16.965 [2024-10-30 09:44:55.507712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.965 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.965 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:16.965 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.965 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.965 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.965 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.965 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:16.965 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:16.965 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.965 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.965 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.965 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.965 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.965 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.965 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.965 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.965 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.965 "name": "raid_bdev1", 00:10:16.965 "uuid": "b69b42e6-aae9-48d2-bbee-7cf1f1157853", 00:10:16.965 "strip_size_kb": 0, 00:10:16.965 "state": "online", 00:10:16.965 "raid_level": "raid1", 00:10:16.965 "superblock": true, 00:10:16.965 "num_base_bdevs": 2, 00:10:16.965 "num_base_bdevs_discovered": 2, 00:10:16.965 "num_base_bdevs_operational": 2, 00:10:16.965 "base_bdevs_list": [ 00:10:16.965 { 00:10:16.965 "name": "BaseBdev1", 00:10:16.965 "uuid": "dca3404d-a24d-5c13-891f-f4fd38d9ebcd", 00:10:16.965 "is_configured": true, 00:10:16.965 "data_offset": 2048, 00:10:16.965 "data_size": 63488 00:10:16.965 }, 00:10:16.965 { 00:10:16.965 "name": "BaseBdev2", 00:10:16.965 "uuid": "ee11728c-180c-56db-92c1-5899e4323b2a", 00:10:16.965 "is_configured": true, 00:10:16.965 "data_offset": 2048, 00:10:16.965 "data_size": 63488 00:10:16.965 } 00:10:16.965 ] 00:10:16.965 }' 00:10:16.965 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.965 09:44:55 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.222 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:17.222 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:17.222 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.222 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.222 [2024-10-30 09:44:55.817930] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.222 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.481 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:10:17.481 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.481 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:17.481 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.481 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.481 09:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.481 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:10:17.481 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:10:17.481 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:10:17.481 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:10:17.481 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:10:17.481 09:44:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:17.481 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:10:17.481 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:17.481 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:17.481 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:17.481 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:10:17.481 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:17.481 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:17.481 09:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:10:17.481 [2024-10-30 09:44:56.065767] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:17.481 /dev/nbd0 00:10:17.481 09:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:17.481 09:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:17.481 09:44:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:10:17.481 09:44:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:10:17.481 09:44:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:17.481 09:44:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:17.481 09:44:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:10:17.738 09:44:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:10:17.738 09:44:56 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:17.738 09:44:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:17.738 09:44:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:17.738 1+0 records in 00:10:17.738 1+0 records out 00:10:17.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000620485 s, 6.6 MB/s 00:10:17.738 09:44:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:17.738 09:44:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:10:17.738 09:44:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:17.738 09:44:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:17.738 09:44:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:10:17.738 09:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:17.738 09:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:17.738 09:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:10:17.738 09:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:10:17.738 09:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:10:21.952 63488+0 records in 00:10:21.952 63488+0 records out 00:10:21.952 32505856 bytes (33 MB, 31 MiB) copied, 3.65284 s, 8.9 MB/s 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:21.952 [2024-10-30 09:44:59.982177] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.952 [2024-10-30 09:44:59.990811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.952 09:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.952 09:45:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.952 09:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.952 "name": "raid_bdev1", 00:10:21.952 "uuid": "b69b42e6-aae9-48d2-bbee-7cf1f1157853", 00:10:21.952 "strip_size_kb": 0, 00:10:21.952 "state": "online", 00:10:21.952 "raid_level": "raid1", 
00:10:21.952 "superblock": true, 00:10:21.952 "num_base_bdevs": 2, 00:10:21.952 "num_base_bdevs_discovered": 1, 00:10:21.952 "num_base_bdevs_operational": 1, 00:10:21.952 "base_bdevs_list": [ 00:10:21.952 { 00:10:21.952 "name": null, 00:10:21.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.952 "is_configured": false, 00:10:21.952 "data_offset": 0, 00:10:21.952 "data_size": 63488 00:10:21.952 }, 00:10:21.952 { 00:10:21.952 "name": "BaseBdev2", 00:10:21.952 "uuid": "ee11728c-180c-56db-92c1-5899e4323b2a", 00:10:21.952 "is_configured": true, 00:10:21.952 "data_offset": 2048, 00:10:21.952 "data_size": 63488 00:10:21.952 } 00:10:21.952 ] 00:10:21.952 }' 00:10:21.952 09:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.952 09:45:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.952 09:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:21.952 09:45:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.952 09:45:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.952 [2024-10-30 09:45:00.314887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:21.952 [2024-10-30 09:45:00.324169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:10:21.952 09:45:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.952 09:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:21.952 [2024-10-30 09:45:00.325801] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:22.883 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:22.883 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:10:22.883 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:22.883 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:22.883 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:22.883 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.883 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.883 09:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.883 09:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.883 09:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.883 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:22.883 "name": "raid_bdev1", 00:10:22.883 "uuid": "b69b42e6-aae9-48d2-bbee-7cf1f1157853", 00:10:22.883 "strip_size_kb": 0, 00:10:22.883 "state": "online", 00:10:22.883 "raid_level": "raid1", 00:10:22.883 "superblock": true, 00:10:22.883 "num_base_bdevs": 2, 00:10:22.883 "num_base_bdevs_discovered": 2, 00:10:22.883 "num_base_bdevs_operational": 2, 00:10:22.883 "process": { 00:10:22.883 "type": "rebuild", 00:10:22.883 "target": "spare", 00:10:22.883 "progress": { 00:10:22.883 "blocks": 20480, 00:10:22.883 "percent": 32 00:10:22.883 } 00:10:22.883 }, 00:10:22.883 "base_bdevs_list": [ 00:10:22.883 { 00:10:22.883 "name": "spare", 00:10:22.883 "uuid": "f2ece599-cf6c-5bf7-96ae-1a631c5e9b6d", 00:10:22.883 "is_configured": true, 00:10:22.883 "data_offset": 2048, 00:10:22.883 "data_size": 63488 00:10:22.883 }, 00:10:22.883 { 00:10:22.883 "name": "BaseBdev2", 00:10:22.883 "uuid": "ee11728c-180c-56db-92c1-5899e4323b2a", 00:10:22.883 "is_configured": true, 00:10:22.883 "data_offset": 2048, 
00:10:22.883 "data_size": 63488 00:10:22.883 } 00:10:22.883 ] 00:10:22.883 }' 00:10:22.884 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:22.884 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:22.884 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:22.884 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:22.884 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:22.884 09:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.884 09:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.884 [2024-10-30 09:45:01.436006] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:23.193 [2024-10-30 09:45:01.531091] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:23.193 [2024-10-30 09:45:01.531164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.193 [2024-10-30 09:45:01.531176] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:23.193 [2024-10-30 09:45:01.531184] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:23.193 09:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.193 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:23.193 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.193 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.193 09:45:01 
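The escaped patterns in the trace, such as `[[ rebuild == \r\e\b\u\i\l\d ]]`, are not literal source code: bash's xtrace prints the right-hand pattern of `==` inside `[[ ]]` with each character backslash-escaped so the traced line is unambiguous. A small sketch reproducing that trace form:

```shell
# xtrace renders the quoted right-hand pattern character-escaped, producing
# trace lines of the form: [[ rebuild == \r\e\b\u\i\l\d ]]
process_type=rebuild
set -x
[[ $process_type == "rebuild" ]] && result=match
set +x
echo "$result"
```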
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.193 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.193 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:23.193 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.193 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.193 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.193 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.193 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.193 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.193 09:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.193 09:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.193 09:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.193 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.193 "name": "raid_bdev1", 00:10:23.193 "uuid": "b69b42e6-aae9-48d2-bbee-7cf1f1157853", 00:10:23.193 "strip_size_kb": 0, 00:10:23.193 "state": "online", 00:10:23.193 "raid_level": "raid1", 00:10:23.193 "superblock": true, 00:10:23.193 "num_base_bdevs": 2, 00:10:23.193 "num_base_bdevs_discovered": 1, 00:10:23.193 "num_base_bdevs_operational": 1, 00:10:23.193 "base_bdevs_list": [ 00:10:23.193 { 00:10:23.193 "name": null, 00:10:23.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.193 "is_configured": false, 00:10:23.193 "data_offset": 0, 00:10:23.193 "data_size": 63488 00:10:23.193 }, 00:10:23.193 { 
00:10:23.193 "name": "BaseBdev2", 00:10:23.193 "uuid": "ee11728c-180c-56db-92c1-5899e4323b2a", 00:10:23.193 "is_configured": true, 00:10:23.193 "data_offset": 2048, 00:10:23.193 "data_size": 63488 00:10:23.193 } 00:10:23.193 ] 00:10:23.193 }' 00:10:23.193 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.193 09:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.472 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:23.472 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:23.472 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:23.472 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:23.472 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:23.472 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.472 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.472 09:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.472 09:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.472 09:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.472 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:23.472 "name": "raid_bdev1", 00:10:23.472 "uuid": "b69b42e6-aae9-48d2-bbee-7cf1f1157853", 00:10:23.472 "strip_size_kb": 0, 00:10:23.472 "state": "online", 00:10:23.472 "raid_level": "raid1", 00:10:23.472 "superblock": true, 00:10:23.472 "num_base_bdevs": 2, 00:10:23.472 "num_base_bdevs_discovered": 1, 00:10:23.472 "num_base_bdevs_operational": 1, 
00:10:23.472 "base_bdevs_list": [ 00:10:23.472 { 00:10:23.472 "name": null, 00:10:23.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.472 "is_configured": false, 00:10:23.472 "data_offset": 0, 00:10:23.472 "data_size": 63488 00:10:23.472 }, 00:10:23.472 { 00:10:23.472 "name": "BaseBdev2", 00:10:23.472 "uuid": "ee11728c-180c-56db-92c1-5899e4323b2a", 00:10:23.472 "is_configured": true, 00:10:23.472 "data_offset": 2048, 00:10:23.472 "data_size": 63488 00:10:23.472 } 00:10:23.472 ] 00:10:23.472 }' 00:10:23.472 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:23.473 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:23.473 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:23.473 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:23.473 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:23.473 09:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.473 09:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.473 [2024-10-30 09:45:01.953981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:23.473 [2024-10-30 09:45:01.962928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:10:23.473 09:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.473 09:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:10:23.473 [2024-10-30 09:45:01.964466] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:24.407 09:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:10:24.407 09:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:24.407 09:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:24.407 09:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:24.407 09:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:24.407 09:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.407 09:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.407 09:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.407 09:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.407 09:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.407 09:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:24.407 "name": "raid_bdev1", 00:10:24.407 "uuid": "b69b42e6-aae9-48d2-bbee-7cf1f1157853", 00:10:24.407 "strip_size_kb": 0, 00:10:24.407 "state": "online", 00:10:24.407 "raid_level": "raid1", 00:10:24.407 "superblock": true, 00:10:24.407 "num_base_bdevs": 2, 00:10:24.407 "num_base_bdevs_discovered": 2, 00:10:24.407 "num_base_bdevs_operational": 2, 00:10:24.407 "process": { 00:10:24.407 "type": "rebuild", 00:10:24.407 "target": "spare", 00:10:24.407 "progress": { 00:10:24.407 "blocks": 20480, 00:10:24.407 "percent": 32 00:10:24.407 } 00:10:24.407 }, 00:10:24.407 "base_bdevs_list": [ 00:10:24.407 { 00:10:24.407 "name": "spare", 00:10:24.407 "uuid": "f2ece599-cf6c-5bf7-96ae-1a631c5e9b6d", 00:10:24.407 "is_configured": true, 00:10:24.407 "data_offset": 2048, 00:10:24.407 "data_size": 63488 00:10:24.407 }, 00:10:24.407 { 00:10:24.407 "name": "BaseBdev2", 00:10:24.407 "uuid": 
"ee11728c-180c-56db-92c1-5899e4323b2a", 00:10:24.407 "is_configured": true, 00:10:24.407 "data_offset": 2048, 00:10:24.407 "data_size": 63488 00:10:24.407 } 00:10:24.407 ] 00:10:24.407 }' 00:10:24.407 09:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:24.665 09:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:24.665 09:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:24.665 09:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:24.665 09:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:10:24.666 09:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:10:24.666 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:10:24.666 09:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:10:24.666 09:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:10:24.666 09:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:10:24.666 09:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=298 00:10:24.666 09:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:24.666 09:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:24.666 09:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:24.666 09:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:24.666 09:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:24.666 09:45:03 bdev_raid.raid_rebuild_test_sb -- 
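The error recorded just above (`bdev_raid.sh: line 666: [: =: unary operator expected`) is a classic quoting bug: an empty, unquoted variable expanded to nothing, leaving `[` with the two arguments `'[' = false ']'`, which it cannot parse as a binary test. A minimal reproduction and the usual quoting fix (the variable name is illustrative, not the script's actual one):

```shell
flag=""
# Unquoted, the empty expansion collapses to:  [ = false ]
# which fails with "unary operator expected" (nonzero exit status).
unquoted_status=0
[ $flag = false ] 2>/dev/null || unquoted_status=$?
# Quoted, the empty string survives as a word and the test stays binary:
[ "$flag" = false ] && quoted=yes || quoted=no
echo "unquoted_status=$unquoted_status quoted=$quoted"
```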
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:24.666 09:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.666 09:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.666 09:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.666 09:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.666 09:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.666 09:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:24.666 "name": "raid_bdev1", 00:10:24.666 "uuid": "b69b42e6-aae9-48d2-bbee-7cf1f1157853", 00:10:24.666 "strip_size_kb": 0, 00:10:24.666 "state": "online", 00:10:24.666 "raid_level": "raid1", 00:10:24.666 "superblock": true, 00:10:24.666 "num_base_bdevs": 2, 00:10:24.666 "num_base_bdevs_discovered": 2, 00:10:24.666 "num_base_bdevs_operational": 2, 00:10:24.666 "process": { 00:10:24.666 "type": "rebuild", 00:10:24.666 "target": "spare", 00:10:24.666 "progress": { 00:10:24.666 "blocks": 22528, 00:10:24.666 "percent": 35 00:10:24.666 } 00:10:24.666 }, 00:10:24.666 "base_bdevs_list": [ 00:10:24.666 { 00:10:24.666 "name": "spare", 00:10:24.666 "uuid": "f2ece599-cf6c-5bf7-96ae-1a631c5e9b6d", 00:10:24.666 "is_configured": true, 00:10:24.666 "data_offset": 2048, 00:10:24.666 "data_size": 63488 00:10:24.666 }, 00:10:24.666 { 00:10:24.666 "name": "BaseBdev2", 00:10:24.666 "uuid": "ee11728c-180c-56db-92c1-5899e4323b2a", 00:10:24.666 "is_configured": true, 00:10:24.666 "data_offset": 2048, 00:10:24.666 "data_size": 63488 00:10:24.666 } 00:10:24.666 ] 00:10:24.666 }' 00:10:24.666 09:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:24.666 09:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:10:24.666 09:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:24.666 09:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:24.666 09:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:25.599 09:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:25.599 09:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:25.599 09:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:25.599 09:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:25.599 09:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:25.599 09:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:25.599 09:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.599 09:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.599 09:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.599 09:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.599 09:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.599 09:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:25.599 "name": "raid_bdev1", 00:10:25.599 "uuid": "b69b42e6-aae9-48d2-bbee-7cf1f1157853", 00:10:25.599 "strip_size_kb": 0, 00:10:25.599 "state": "online", 00:10:25.599 "raid_level": "raid1", 00:10:25.599 "superblock": true, 00:10:25.599 "num_base_bdevs": 2, 00:10:25.599 "num_base_bdevs_discovered": 2, 00:10:25.599 
"num_base_bdevs_operational": 2, 00:10:25.599 "process": { 00:10:25.599 "type": "rebuild", 00:10:25.599 "target": "spare", 00:10:25.599 "progress": { 00:10:25.599 "blocks": 43008, 00:10:25.599 "percent": 67 00:10:25.599 } 00:10:25.599 }, 00:10:25.599 "base_bdevs_list": [ 00:10:25.599 { 00:10:25.599 "name": "spare", 00:10:25.599 "uuid": "f2ece599-cf6c-5bf7-96ae-1a631c5e9b6d", 00:10:25.599 "is_configured": true, 00:10:25.599 "data_offset": 2048, 00:10:25.599 "data_size": 63488 00:10:25.599 }, 00:10:25.599 { 00:10:25.599 "name": "BaseBdev2", 00:10:25.599 "uuid": "ee11728c-180c-56db-92c1-5899e4323b2a", 00:10:25.599 "is_configured": true, 00:10:25.599 "data_offset": 2048, 00:10:25.599 "data_size": 63488 00:10:25.599 } 00:10:25.599 ] 00:10:25.599 }' 00:10:25.599 09:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:25.857 09:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:25.857 09:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:25.857 09:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:25.857 09:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:26.790 [2024-10-30 09:45:05.077705] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:10:26.790 [2024-10-30 09:45:05.077928] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:10:26.790 [2024-10-30 09:45:05.078029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.790 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:26.790 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:26.790 09:45:05 bdev_raid.raid_rebuild_test_sb -- 
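The rebuild `progress` objects in these JSON dumps appear to report `percent` as truncating integer division of completed blocks over the 63488-block data size (20480 blocks reported as 32, 22528 as 35, 43008 as 67). The same arithmetic in bash, as a consistency check of those values:

```shell
total=63488                              # data_size, in 512-byte blocks
for blocks in 20480 22528 43008; do
  percent=$(( blocks * 100 / total ))    # truncating integer division
  echo "$blocks -> $percent"
done
```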
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:26.790 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:26.790 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:26.790 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:26.790 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.790 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.790 09:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.790 09:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.790 09:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.790 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:26.790 "name": "raid_bdev1", 00:10:26.790 "uuid": "b69b42e6-aae9-48d2-bbee-7cf1f1157853", 00:10:26.790 "strip_size_kb": 0, 00:10:26.790 "state": "online", 00:10:26.790 "raid_level": "raid1", 00:10:26.790 "superblock": true, 00:10:26.791 "num_base_bdevs": 2, 00:10:26.791 "num_base_bdevs_discovered": 2, 00:10:26.791 "num_base_bdevs_operational": 2, 00:10:26.791 "base_bdevs_list": [ 00:10:26.791 { 00:10:26.791 "name": "spare", 00:10:26.791 "uuid": "f2ece599-cf6c-5bf7-96ae-1a631c5e9b6d", 00:10:26.791 "is_configured": true, 00:10:26.791 "data_offset": 2048, 00:10:26.791 "data_size": 63488 00:10:26.791 }, 00:10:26.791 { 00:10:26.791 "name": "BaseBdev2", 00:10:26.791 "uuid": "ee11728c-180c-56db-92c1-5899e4323b2a", 00:10:26.791 "is_configured": true, 00:10:26.791 "data_offset": 2048, 00:10:26.791 "data_size": 63488 00:10:26.791 } 00:10:26.791 ] 00:10:26.791 }' 00:10:26.791 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:10:26.791 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:10:26.791 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:26.791 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:10:26.791 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:10:26.791 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:26.791 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:26.791 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:26.791 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:26.791 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:26.791 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.791 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.791 09:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.791 09:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.791 09:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.049 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:27.049 "name": "raid_bdev1", 00:10:27.049 "uuid": "b69b42e6-aae9-48d2-bbee-7cf1f1157853", 00:10:27.049 "strip_size_kb": 0, 00:10:27.049 "state": "online", 00:10:27.049 "raid_level": "raid1", 00:10:27.049 "superblock": true, 00:10:27.049 "num_base_bdevs": 2, 00:10:27.049 "num_base_bdevs_discovered": 2, 00:10:27.049 "num_base_bdevs_operational": 2, 
00:10:27.049 "base_bdevs_list": [ 00:10:27.049 { 00:10:27.049 "name": "spare", 00:10:27.049 "uuid": "f2ece599-cf6c-5bf7-96ae-1a631c5e9b6d", 00:10:27.049 "is_configured": true, 00:10:27.049 "data_offset": 2048, 00:10:27.049 "data_size": 63488 00:10:27.049 }, 00:10:27.049 { 00:10:27.049 "name": "BaseBdev2", 00:10:27.049 "uuid": "ee11728c-180c-56db-92c1-5899e4323b2a", 00:10:27.049 "is_configured": true, 00:10:27.049 "data_offset": 2048, 00:10:27.049 "data_size": 63488 00:10:27.049 } 00:10:27.049 ] 00:10:27.049 }' 00:10:27.049 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:27.049 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:27.049 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:27.049 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:27.049 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:27.049 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.049 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.049 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.049 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.049 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:27.049 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.049 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.049 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.049 09:45:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.049 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.049 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.049 09:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.050 09:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.050 09:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.050 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.050 "name": "raid_bdev1", 00:10:27.050 "uuid": "b69b42e6-aae9-48d2-bbee-7cf1f1157853", 00:10:27.050 "strip_size_kb": 0, 00:10:27.050 "state": "online", 00:10:27.050 "raid_level": "raid1", 00:10:27.050 "superblock": true, 00:10:27.050 "num_base_bdevs": 2, 00:10:27.050 "num_base_bdevs_discovered": 2, 00:10:27.050 "num_base_bdevs_operational": 2, 00:10:27.050 "base_bdevs_list": [ 00:10:27.050 { 00:10:27.050 "name": "spare", 00:10:27.050 "uuid": "f2ece599-cf6c-5bf7-96ae-1a631c5e9b6d", 00:10:27.050 "is_configured": true, 00:10:27.050 "data_offset": 2048, 00:10:27.050 "data_size": 63488 00:10:27.050 }, 00:10:27.050 { 00:10:27.050 "name": "BaseBdev2", 00:10:27.050 "uuid": "ee11728c-180c-56db-92c1-5899e4323b2a", 00:10:27.050 "is_configured": true, 00:10:27.050 "data_offset": 2048, 00:10:27.050 "data_size": 63488 00:10:27.050 } 00:10:27.050 ] 00:10:27.050 }' 00:10:27.050 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.050 09:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.330 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:27.330 09:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:27.330 09:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.330 [2024-10-30 09:45:05.800197] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:27.330 [2024-10-30 09:45:05.800369] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.330 [2024-10-30 09:45:05.800435] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.330 [2024-10-30 09:45:05.800489] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:27.330 [2024-10-30 09:45:05.800497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:27.330 09:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.330 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.330 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:10:27.330 09:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.330 09:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.330 09:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.330 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:10:27.330 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:10:27.330 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:10:27.330 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:10:27.330 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:27.330 
09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:10:27.330 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:27.330 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:27.330 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:27.330 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:10:27.330 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:27.330 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:27.330 09:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:10:27.635 /dev/nbd0 00:10:27.635 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:27.635 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:27.635 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:10:27.635 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:10:27.635 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:27.635 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:27.635 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:10:27.635 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:10:27.635 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:27.635 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:27.635 09:45:06 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:27.635 1+0 records in 00:10:27.635 1+0 records out 00:10:27.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255605 s, 16.0 MB/s 00:10:27.635 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.635 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:10:27.635 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.635 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:27.635 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:10:27.635 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:27.635 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:27.635 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:10:27.894 /dev/nbd1 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- 
# grep -q -w nbd1 /proc/partitions 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:27.894 1+0 records in 00:10:27.894 1+0 records out 00:10:27.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259768 s, 15.8 MB/s 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:27.894 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:28.152 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:28.152 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:28.152 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:28.152 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:28.152 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:28.152 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:28.152 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:10:28.152 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:10:28.152 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:28.152 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.411 [2024-10-30 09:45:06.845431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:28.411 [2024-10-30 09:45:06.845477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.411 [2024-10-30 09:45:06.845495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:28.411 [2024-10-30 09:45:06.845504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.411 [2024-10-30 09:45:06.847379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.411 [2024-10-30 09:45:06.847410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:28.411 [2024-10-30 09:45:06.847488] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev spare 00:10:28.411 [2024-10-30 09:45:06.847529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:28.411 [2024-10-30 09:45:06.847642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:28.411 spare 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.411 [2024-10-30 09:45:06.947723] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:28.411 [2024-10-30 09:45:06.947757] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:28.411 [2024-10-30 09:45:06.948014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:10:28.411 [2024-10-30 09:45:06.948170] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:28.411 [2024-10-30 09:45:06.948185] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:10:28.411 [2024-10-30 09:45:06.948321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.411 09:45:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.411 "name": "raid_bdev1", 00:10:28.411 "uuid": "b69b42e6-aae9-48d2-bbee-7cf1f1157853", 00:10:28.411 "strip_size_kb": 0, 00:10:28.411 "state": "online", 00:10:28.411 "raid_level": "raid1", 00:10:28.411 "superblock": true, 00:10:28.411 "num_base_bdevs": 2, 00:10:28.411 "num_base_bdevs_discovered": 2, 00:10:28.411 "num_base_bdevs_operational": 2, 00:10:28.411 "base_bdevs_list": [ 00:10:28.411 { 00:10:28.411 "name": "spare", 00:10:28.411 "uuid": "f2ece599-cf6c-5bf7-96ae-1a631c5e9b6d", 00:10:28.411 "is_configured": true, 00:10:28.411 "data_offset": 2048, 00:10:28.411 "data_size": 63488 00:10:28.411 }, 00:10:28.411 { 
00:10:28.411 "name": "BaseBdev2", 00:10:28.411 "uuid": "ee11728c-180c-56db-92c1-5899e4323b2a", 00:10:28.411 "is_configured": true, 00:10:28.411 "data_offset": 2048, 00:10:28.411 "data_size": 63488 00:10:28.411 } 00:10:28.411 ] 00:10:28.411 }' 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.411 09:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.669 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:28.669 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:28.669 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:28.669 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:28.669 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:28.669 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.669 09:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.669 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.669 09:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.669 09:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.927 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:28.927 "name": "raid_bdev1", 00:10:28.927 "uuid": "b69b42e6-aae9-48d2-bbee-7cf1f1157853", 00:10:28.927 "strip_size_kb": 0, 00:10:28.927 "state": "online", 00:10:28.927 "raid_level": "raid1", 00:10:28.927 "superblock": true, 00:10:28.927 "num_base_bdevs": 2, 00:10:28.927 "num_base_bdevs_discovered": 2, 00:10:28.927 "num_base_bdevs_operational": 2, 
00:10:28.927 "base_bdevs_list": [ 00:10:28.927 { 00:10:28.927 "name": "spare", 00:10:28.927 "uuid": "f2ece599-cf6c-5bf7-96ae-1a631c5e9b6d", 00:10:28.927 "is_configured": true, 00:10:28.927 "data_offset": 2048, 00:10:28.927 "data_size": 63488 00:10:28.927 }, 00:10:28.927 { 00:10:28.927 "name": "BaseBdev2", 00:10:28.927 "uuid": "ee11728c-180c-56db-92c1-5899e4323b2a", 00:10:28.927 "is_configured": true, 00:10:28.927 "data_offset": 2048, 00:10:28.927 "data_size": 63488 00:10:28.927 } 00:10:28.927 ] 00:10:28.927 }' 00:10:28.927 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:28.927 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:28.927 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.928 [2024-10-30 09:45:07.401574] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.928 "name": "raid_bdev1", 00:10:28.928 "uuid": 
"b69b42e6-aae9-48d2-bbee-7cf1f1157853", 00:10:28.928 "strip_size_kb": 0, 00:10:28.928 "state": "online", 00:10:28.928 "raid_level": "raid1", 00:10:28.928 "superblock": true, 00:10:28.928 "num_base_bdevs": 2, 00:10:28.928 "num_base_bdevs_discovered": 1, 00:10:28.928 "num_base_bdevs_operational": 1, 00:10:28.928 "base_bdevs_list": [ 00:10:28.928 { 00:10:28.928 "name": null, 00:10:28.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.928 "is_configured": false, 00:10:28.928 "data_offset": 0, 00:10:28.928 "data_size": 63488 00:10:28.928 }, 00:10:28.928 { 00:10:28.928 "name": "BaseBdev2", 00:10:28.928 "uuid": "ee11728c-180c-56db-92c1-5899e4323b2a", 00:10:28.928 "is_configured": true, 00:10:28.928 "data_offset": 2048, 00:10:28.928 "data_size": 63488 00:10:28.928 } 00:10:28.928 ] 00:10:28.928 }' 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.928 09:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.185 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:29.185 09:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.185 09:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.185 [2024-10-30 09:45:07.737651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:29.185 [2024-10-30 09:45:07.737793] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:10:29.185 [2024-10-30 09:45:07.737806] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:10:29.185 [2024-10-30 09:45:07.737835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:29.185 [2024-10-30 09:45:07.746868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:10:29.185 09:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.185 09:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:10:29.185 [2024-10-30 09:45:07.748449] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:30.557 "name": "raid_bdev1", 00:10:30.557 "uuid": "b69b42e6-aae9-48d2-bbee-7cf1f1157853", 00:10:30.557 "strip_size_kb": 0, 00:10:30.557 "state": "online", 00:10:30.557 "raid_level": "raid1", 
00:10:30.557 "superblock": true, 00:10:30.557 "num_base_bdevs": 2, 00:10:30.557 "num_base_bdevs_discovered": 2, 00:10:30.557 "num_base_bdevs_operational": 2, 00:10:30.557 "process": { 00:10:30.557 "type": "rebuild", 00:10:30.557 "target": "spare", 00:10:30.557 "progress": { 00:10:30.557 "blocks": 20480, 00:10:30.557 "percent": 32 00:10:30.557 } 00:10:30.557 }, 00:10:30.557 "base_bdevs_list": [ 00:10:30.557 { 00:10:30.557 "name": "spare", 00:10:30.557 "uuid": "f2ece599-cf6c-5bf7-96ae-1a631c5e9b6d", 00:10:30.557 "is_configured": true, 00:10:30.557 "data_offset": 2048, 00:10:30.557 "data_size": 63488 00:10:30.557 }, 00:10:30.557 { 00:10:30.557 "name": "BaseBdev2", 00:10:30.557 "uuid": "ee11728c-180c-56db-92c1-5899e4323b2a", 00:10:30.557 "is_configured": true, 00:10:30.557 "data_offset": 2048, 00:10:30.557 "data_size": 63488 00:10:30.557 } 00:10:30.557 ] 00:10:30.557 }' 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.557 [2024-10-30 09:45:08.854650] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:30.557 [2024-10-30 09:45:08.953793] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:30.557 [2024-10-30 09:45:08.953865] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:10:30.557 [2024-10-30 09:45:08.953878] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:30.557 [2024-10-30 09:45:08.953886] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.557 "name": "raid_bdev1", 00:10:30.557 "uuid": "b69b42e6-aae9-48d2-bbee-7cf1f1157853", 00:10:30.557 "strip_size_kb": 0, 00:10:30.557 "state": "online", 00:10:30.557 "raid_level": "raid1", 00:10:30.557 "superblock": true, 00:10:30.557 "num_base_bdevs": 2, 00:10:30.557 "num_base_bdevs_discovered": 1, 00:10:30.557 "num_base_bdevs_operational": 1, 00:10:30.557 "base_bdevs_list": [ 00:10:30.557 { 00:10:30.557 "name": null, 00:10:30.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.557 "is_configured": false, 00:10:30.557 "data_offset": 0, 00:10:30.557 "data_size": 63488 00:10:30.557 }, 00:10:30.557 { 00:10:30.557 "name": "BaseBdev2", 00:10:30.557 "uuid": "ee11728c-180c-56db-92c1-5899e4323b2a", 00:10:30.557 "is_configured": true, 00:10:30.557 "data_offset": 2048, 00:10:30.557 "data_size": 63488 00:10:30.557 } 00:10:30.557 ] 00:10:30.557 }' 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.557 09:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.815 09:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:30.815 09:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.815 09:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.815 [2024-10-30 09:45:09.296174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:30.815 [2024-10-30 09:45:09.296230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.815 [2024-10-30 09:45:09.296248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:30.815 [2024-10-30 09:45:09.296257] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.815 [2024-10-30 09:45:09.296618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.815 [2024-10-30 09:45:09.296649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:30.815 [2024-10-30 09:45:09.296719] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:10:30.815 [2024-10-30 09:45:09.296734] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:10:30.815 [2024-10-30 09:45:09.296743] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:10:30.815 [2024-10-30 09:45:09.296763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:30.815 [2024-10-30 09:45:09.305691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:10:30.815 spare 00:10:30.815 09:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.816 09:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:10:30.816 [2024-10-30 09:45:09.307272] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:31.746 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:31.746 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:31.746 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:31.746 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:31.746 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:31.746 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:31.746 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.746 09:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.746 09:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.746 09:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.746 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:31.746 "name": "raid_bdev1", 00:10:31.746 "uuid": "b69b42e6-aae9-48d2-bbee-7cf1f1157853", 00:10:31.746 "strip_size_kb": 0, 00:10:31.746 "state": "online", 00:10:31.746 "raid_level": "raid1", 00:10:31.746 "superblock": true, 00:10:31.746 "num_base_bdevs": 2, 00:10:31.746 "num_base_bdevs_discovered": 2, 00:10:31.746 "num_base_bdevs_operational": 2, 00:10:31.746 "process": { 00:10:31.746 "type": "rebuild", 00:10:31.746 "target": "spare", 00:10:31.746 "progress": { 00:10:31.746 "blocks": 20480, 00:10:31.746 "percent": 32 00:10:31.747 } 00:10:31.747 }, 00:10:31.747 "base_bdevs_list": [ 00:10:31.747 { 00:10:31.747 "name": "spare", 00:10:31.747 "uuid": "f2ece599-cf6c-5bf7-96ae-1a631c5e9b6d", 00:10:31.747 "is_configured": true, 00:10:31.747 "data_offset": 2048, 00:10:31.747 "data_size": 63488 00:10:31.747 }, 00:10:31.747 { 00:10:31.747 "name": "BaseBdev2", 00:10:31.747 "uuid": "ee11728c-180c-56db-92c1-5899e4323b2a", 00:10:31.747 "is_configured": true, 00:10:31.747 "data_offset": 2048, 00:10:31.747 "data_size": 63488 00:10:31.747 } 00:10:31.747 ] 00:10:31.747 }' 00:10:31.747 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:32.004 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:32.004 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:32.004 
09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:32.004 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:10:32.004 09:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.004 09:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.004 [2024-10-30 09:45:10.409438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:32.004 [2024-10-30 09:45:10.411954] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:32.004 [2024-10-30 09:45:10.412001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.004 [2024-10-30 09:45:10.412015] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:32.004 [2024-10-30 09:45:10.412020] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:32.004 09:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.004 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:32.004 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.004 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.004 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.004 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.004 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:32.004 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.005 09:45:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.005 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.005 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.005 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.005 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.005 09:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.005 09:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.005 09:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.005 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.005 "name": "raid_bdev1", 00:10:32.005 "uuid": "b69b42e6-aae9-48d2-bbee-7cf1f1157853", 00:10:32.005 "strip_size_kb": 0, 00:10:32.005 "state": "online", 00:10:32.005 "raid_level": "raid1", 00:10:32.005 "superblock": true, 00:10:32.005 "num_base_bdevs": 2, 00:10:32.005 "num_base_bdevs_discovered": 1, 00:10:32.005 "num_base_bdevs_operational": 1, 00:10:32.005 "base_bdevs_list": [ 00:10:32.005 { 00:10:32.005 "name": null, 00:10:32.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.005 "is_configured": false, 00:10:32.005 "data_offset": 0, 00:10:32.005 "data_size": 63488 00:10:32.005 }, 00:10:32.005 { 00:10:32.005 "name": "BaseBdev2", 00:10:32.005 "uuid": "ee11728c-180c-56db-92c1-5899e4323b2a", 00:10:32.005 "is_configured": true, 00:10:32.005 "data_offset": 2048, 00:10:32.005 "data_size": 63488 00:10:32.005 } 00:10:32.005 ] 00:10:32.005 }' 00:10:32.005 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.005 09:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.262 09:45:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:32.262 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:32.262 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:32.262 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:32.262 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:32.262 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.262 09:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.262 09:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.262 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.262 09:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.262 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:32.262 "name": "raid_bdev1", 00:10:32.262 "uuid": "b69b42e6-aae9-48d2-bbee-7cf1f1157853", 00:10:32.262 "strip_size_kb": 0, 00:10:32.262 "state": "online", 00:10:32.262 "raid_level": "raid1", 00:10:32.262 "superblock": true, 00:10:32.262 "num_base_bdevs": 2, 00:10:32.262 "num_base_bdevs_discovered": 1, 00:10:32.262 "num_base_bdevs_operational": 1, 00:10:32.262 "base_bdevs_list": [ 00:10:32.262 { 00:10:32.262 "name": null, 00:10:32.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.262 "is_configured": false, 00:10:32.262 "data_offset": 0, 00:10:32.262 "data_size": 63488 00:10:32.262 }, 00:10:32.262 { 00:10:32.262 "name": "BaseBdev2", 00:10:32.262 "uuid": "ee11728c-180c-56db-92c1-5899e4323b2a", 00:10:32.262 "is_configured": true, 00:10:32.262 "data_offset": 2048, 00:10:32.262 "data_size": 
63488 00:10:32.262 } 00:10:32.262 ] 00:10:32.262 }' 00:10:32.262 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:32.262 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:32.262 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:32.262 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:32.262 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:10:32.262 09:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.262 09:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.262 09:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.262 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:32.263 09:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.263 09:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.263 [2024-10-30 09:45:10.858653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:32.263 [2024-10-30 09:45:10.858703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.263 [2024-10-30 09:45:10.858721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:32.263 [2024-10-30 09:45:10.858730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.263 [2024-10-30 09:45:10.859100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.263 [2024-10-30 09:45:10.859123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:10:32.263 [2024-10-30 09:45:10.859192] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:10:32.263 [2024-10-30 09:45:10.859207] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:10:32.263 [2024-10-30 09:45:10.859215] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:10:32.263 [2024-10-30 09:45:10.859222] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:10:32.263 BaseBdev1 00:10:32.263 09:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.263 09:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:10:33.634 09:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:33.634 09:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.634 09:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.634 09:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.634 09:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.634 09:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:33.634 09:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.634 09:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.634 09:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.634 09:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.634 09:45:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.634 09:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.634 09:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.634 09:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.634 09:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.634 09:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.634 "name": "raid_bdev1", 00:10:33.634 "uuid": "b69b42e6-aae9-48d2-bbee-7cf1f1157853", 00:10:33.634 "strip_size_kb": 0, 00:10:33.634 "state": "online", 00:10:33.634 "raid_level": "raid1", 00:10:33.634 "superblock": true, 00:10:33.634 "num_base_bdevs": 2, 00:10:33.634 "num_base_bdevs_discovered": 1, 00:10:33.634 "num_base_bdevs_operational": 1, 00:10:33.634 "base_bdevs_list": [ 00:10:33.634 { 00:10:33.634 "name": null, 00:10:33.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.634 "is_configured": false, 00:10:33.634 "data_offset": 0, 00:10:33.634 "data_size": 63488 00:10:33.634 }, 00:10:33.634 { 00:10:33.634 "name": "BaseBdev2", 00:10:33.634 "uuid": "ee11728c-180c-56db-92c1-5899e4323b2a", 00:10:33.634 "is_configured": true, 00:10:33.634 "data_offset": 2048, 00:10:33.634 "data_size": 63488 00:10:33.634 } 00:10:33.634 ] 00:10:33.634 }' 00:10:33.634 09:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.634 09:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.634 09:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:33.634 09:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:33.634 09:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:10:33.634 09:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:33.634 09:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:33.634 09:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.634 09:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.634 09:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.634 09:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.634 09:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.634 09:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:33.634 "name": "raid_bdev1", 00:10:33.634 "uuid": "b69b42e6-aae9-48d2-bbee-7cf1f1157853", 00:10:33.634 "strip_size_kb": 0, 00:10:33.634 "state": "online", 00:10:33.634 "raid_level": "raid1", 00:10:33.634 "superblock": true, 00:10:33.634 "num_base_bdevs": 2, 00:10:33.634 "num_base_bdevs_discovered": 1, 00:10:33.634 "num_base_bdevs_operational": 1, 00:10:33.634 "base_bdevs_list": [ 00:10:33.634 { 00:10:33.634 "name": null, 00:10:33.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.634 "is_configured": false, 00:10:33.634 "data_offset": 0, 00:10:33.634 "data_size": 63488 00:10:33.634 }, 00:10:33.634 { 00:10:33.634 "name": "BaseBdev2", 00:10:33.634 "uuid": "ee11728c-180c-56db-92c1-5899e4323b2a", 00:10:33.634 "is_configured": true, 00:10:33.634 "data_offset": 2048, 00:10:33.634 "data_size": 63488 00:10:33.634 } 00:10:33.634 ] 00:10:33.634 }' 00:10:33.634 09:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:33.892 09:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:33.892 09:45:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:33.892 09:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:33.892 09:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:10:33.892 09:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:10:33.892 09:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:10:33.892 09:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:33.892 09:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:33.892 09:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:33.892 09:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:33.892 09:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:10:33.892 09:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.892 09:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.892 [2024-10-30 09:45:12.302964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:33.892 [2024-10-30 09:45:12.303090] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:10:33.892 [2024-10-30 09:45:12.303106] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:10:33.892 request: 00:10:33.892 { 00:10:33.892 "base_bdev": "BaseBdev1", 00:10:33.892 "raid_bdev": "raid_bdev1", 00:10:33.892 "method": 
"bdev_raid_add_base_bdev", 00:10:33.892 "req_id": 1 00:10:33.892 } 00:10:33.892 Got JSON-RPC error response 00:10:33.892 response: 00:10:33.892 { 00:10:33.892 "code": -22, 00:10:33.892 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:10:33.892 } 00:10:33.892 09:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:33.892 09:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:10:33.892 09:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:33.892 09:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:33.892 09:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:33.892 09:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:10:34.824 09:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:34.824 09:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:34.824 09:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.824 09:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:34.824 09:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:34.824 09:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:34.824 09:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.824 09:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.824 09:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.824 09:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.824 09:45:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.824 09:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.824 09:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.824 09:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.824 09:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.824 09:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.824 "name": "raid_bdev1", 00:10:34.824 "uuid": "b69b42e6-aae9-48d2-bbee-7cf1f1157853", 00:10:34.824 "strip_size_kb": 0, 00:10:34.824 "state": "online", 00:10:34.824 "raid_level": "raid1", 00:10:34.824 "superblock": true, 00:10:34.824 "num_base_bdevs": 2, 00:10:34.824 "num_base_bdevs_discovered": 1, 00:10:34.824 "num_base_bdevs_operational": 1, 00:10:34.824 "base_bdevs_list": [ 00:10:34.824 { 00:10:34.824 "name": null, 00:10:34.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.824 "is_configured": false, 00:10:34.824 "data_offset": 0, 00:10:34.824 "data_size": 63488 00:10:34.824 }, 00:10:34.824 { 00:10:34.824 "name": "BaseBdev2", 00:10:34.824 "uuid": "ee11728c-180c-56db-92c1-5899e4323b2a", 00:10:34.824 "is_configured": true, 00:10:34.824 "data_offset": 2048, 00:10:34.824 "data_size": 63488 00:10:34.824 } 00:10:34.824 ] 00:10:34.824 }' 00:10:34.824 09:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.824 09:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.080 09:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:35.080 09:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:35.080 09:45:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:35.080 09:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:35.080 09:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:35.080 09:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.080 09:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.080 09:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.081 09:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.081 09:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.081 09:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:35.081 "name": "raid_bdev1", 00:10:35.081 "uuid": "b69b42e6-aae9-48d2-bbee-7cf1f1157853", 00:10:35.081 "strip_size_kb": 0, 00:10:35.081 "state": "online", 00:10:35.081 "raid_level": "raid1", 00:10:35.081 "superblock": true, 00:10:35.081 "num_base_bdevs": 2, 00:10:35.081 "num_base_bdevs_discovered": 1, 00:10:35.081 "num_base_bdevs_operational": 1, 00:10:35.081 "base_bdevs_list": [ 00:10:35.081 { 00:10:35.081 "name": null, 00:10:35.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.081 "is_configured": false, 00:10:35.081 "data_offset": 0, 00:10:35.081 "data_size": 63488 00:10:35.081 }, 00:10:35.081 { 00:10:35.081 "name": "BaseBdev2", 00:10:35.081 "uuid": "ee11728c-180c-56db-92c1-5899e4323b2a", 00:10:35.081 "is_configured": true, 00:10:35.081 "data_offset": 2048, 00:10:35.081 "data_size": 63488 00:10:35.081 } 00:10:35.081 ] 00:10:35.081 }' 00:10:35.081 09:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:35.338 09:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:10:35.338 09:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:35.338 09:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:35.338 09:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 73757 00:10:35.338 09:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 73757 ']' 00:10:35.338 09:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 73757 00:10:35.338 09:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:10:35.338 09:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:35.338 09:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73757 00:10:35.338 09:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:35.338 09:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:35.338 killing process with pid 73757 00:10:35.338 09:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73757' 00:10:35.338 09:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 73757 00:10:35.338 Received shutdown signal, test time was about 60.000000 seconds 00:10:35.338 00:10:35.338 Latency(us) 00:10:35.338 [2024-10-30T09:45:13.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:35.338 [2024-10-30T09:45:13.958Z] =================================================================================================================== 00:10:35.338 [2024-10-30T09:45:13.958Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:35.338 [2024-10-30 09:45:13.778317] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:35.338 09:45:13 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 73757 00:10:35.338 [2024-10-30 09:45:13.778419] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.338 [2024-10-30 09:45:13.778458] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:35.338 [2024-10-30 09:45:13.778466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:10:35.338 [2024-10-30 09:45:13.924919] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:35.903 09:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:10:35.903 00:10:35.903 real 0m20.011s 00:10:35.903 user 0m23.921s 00:10:35.903 sys 0m2.792s 00:10:35.903 09:45:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:35.903 09:45:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.903 ************************************ 00:10:35.903 END TEST raid_rebuild_test_sb 00:10:35.903 ************************************ 00:10:35.903 09:45:14 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:10:35.903 09:45:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:10:35.903 09:45:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:35.903 09:45:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:36.161 ************************************ 00:10:36.161 START TEST raid_rebuild_test_io 00:10:36.161 ************************************ 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false true true 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:36.161 
09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=74458 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 74458 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 74458 ']' 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:36.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:36.161 09:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:36.161 [2024-10-30 09:45:14.586772] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:10:36.161 [2024-10-30 09:45:14.586892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74458 ] 00:10:36.161 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:36.161 Zero copy mechanism will not be used. 
00:10:36.161 [2024-10-30 09:45:14.745421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.419 [2024-10-30 09:45:14.830806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.419 [2024-10-30 09:45:14.941215] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.419 [2024-10-30 09:45:14.941255] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:36.986 BaseBdev1_malloc 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:36.986 [2024-10-30 09:45:15.456803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:36.986 [2024-10-30 09:45:15.456879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.986 [2024-10-30 09:45:15.456898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:36.986 [2024-10-30 
09:45:15.456908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.986 [2024-10-30 09:45:15.458721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.986 [2024-10-30 09:45:15.458759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:36.986 BaseBdev1 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:36.986 BaseBdev2_malloc 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:36.986 [2024-10-30 09:45:15.488370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:36.986 [2024-10-30 09:45:15.488418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.986 [2024-10-30 09:45:15.488432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:36.986 [2024-10-30 09:45:15.488440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.986 [2024-10-30 09:45:15.490153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:10:36.986 [2024-10-30 09:45:15.490185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:36.986 BaseBdev2 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:36.986 spare_malloc 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:36.986 spare_delay 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:36.986 [2024-10-30 09:45:15.543557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:36.986 [2024-10-30 09:45:15.543605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.986 [2024-10-30 09:45:15.543621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:36.986 [2024-10-30 09:45:15.543630] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.986 [2024-10-30 09:45:15.545395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.986 [2024-10-30 09:45:15.545426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:36.986 spare 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:36.986 [2024-10-30 09:45:15.551597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.986 [2024-10-30 09:45:15.553143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.986 [2024-10-30 09:45:15.553215] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:36.986 [2024-10-30 09:45:15.553226] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:36.986 [2024-10-30 09:45:15.553436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:36.986 [2024-10-30 09:45:15.553549] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:36.986 [2024-10-30 09:45:15.553558] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:36.986 [2024-10-30 09:45:15.553670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.986 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.986 "name": "raid_bdev1", 00:10:36.986 "uuid": "5106a092-c18a-46cc-ae62-abd26157860c", 00:10:36.986 "strip_size_kb": 0, 00:10:36.986 "state": "online", 00:10:36.986 "raid_level": "raid1", 00:10:36.986 "superblock": false, 00:10:36.986 "num_base_bdevs": 2, 00:10:36.986 
"num_base_bdevs_discovered": 2, 00:10:36.986 "num_base_bdevs_operational": 2, 00:10:36.986 "base_bdevs_list": [ 00:10:36.986 { 00:10:36.986 "name": "BaseBdev1", 00:10:36.987 "uuid": "632b69e1-1a79-5300-9dc8-069a26d1aaba", 00:10:36.987 "is_configured": true, 00:10:36.987 "data_offset": 0, 00:10:36.987 "data_size": 65536 00:10:36.987 }, 00:10:36.987 { 00:10:36.987 "name": "BaseBdev2", 00:10:36.987 "uuid": "660cfb9a-f875-5e53-8eee-4cfad7873d0c", 00:10:36.987 "is_configured": true, 00:10:36.987 "data_offset": 0, 00:10:36.987 "data_size": 65536 00:10:36.987 } 00:10:36.987 ] 00:10:36.987 }' 00:10:36.987 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.987 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:37.246 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:37.246 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:37.246 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.246 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:37.503 [2024-10-30 09:45:15.867915] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:37.503 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.503 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:10:37.503 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:37.503 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.503 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:37.504 [2024-10-30 09:45:15.943646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.504 "name": "raid_bdev1", 00:10:37.504 "uuid": "5106a092-c18a-46cc-ae62-abd26157860c", 00:10:37.504 "strip_size_kb": 0, 00:10:37.504 "state": "online", 00:10:37.504 "raid_level": "raid1", 00:10:37.504 "superblock": false, 00:10:37.504 "num_base_bdevs": 2, 00:10:37.504 "num_base_bdevs_discovered": 1, 00:10:37.504 "num_base_bdevs_operational": 1, 00:10:37.504 "base_bdevs_list": [ 00:10:37.504 { 00:10:37.504 "name": null, 00:10:37.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.504 "is_configured": false, 00:10:37.504 "data_offset": 0, 00:10:37.504 "data_size": 65536 00:10:37.504 }, 00:10:37.504 { 00:10:37.504 "name": "BaseBdev2", 00:10:37.504 "uuid": "660cfb9a-f875-5e53-8eee-4cfad7873d0c", 00:10:37.504 "is_configured": true, 00:10:37.504 "data_offset": 0, 00:10:37.504 "data_size": 65536 00:10:37.504 } 00:10:37.504 ] 00:10:37.504 }' 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.504 09:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:37.504 [2024-10-30 09:45:16.024100] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:37.504 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:10:37.504 Zero copy mechanism will not be used. 00:10:37.504 Running I/O for 60 seconds... 00:10:37.761 09:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:37.761 09:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.761 09:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:37.761 [2024-10-30 09:45:16.256708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:37.761 09:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.761 09:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:37.761 [2024-10-30 09:45:16.301249] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:10:37.761 [2024-10-30 09:45:16.302863] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:38.019 [2024-10-30 09:45:16.404741] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:10:38.019 [2024-10-30 09:45:16.405116] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:10:38.019 [2024-10-30 09:45:16.617873] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:10:38.019 [2024-10-30 09:45:16.618086] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:10:38.583 [2024-10-30 09:45:16.941819] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:10:38.583 191.00 IOPS, 573.00 MiB/s [2024-10-30T09:45:17.203Z] [2024-10-30 09:45:17.160205] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:10:38.583 [2024-10-30 09:45:17.160452] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:10:38.842 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:38.842 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:38.842 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:38.842 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:38.842 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:38.842 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.842 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.842 09:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.842 09:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:38.842 09:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.842 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:38.842 "name": "raid_bdev1", 00:10:38.842 "uuid": "5106a092-c18a-46cc-ae62-abd26157860c", 00:10:38.842 "strip_size_kb": 0, 00:10:38.842 "state": "online", 00:10:38.842 "raid_level": "raid1", 00:10:38.842 "superblock": false, 00:10:38.842 "num_base_bdevs": 2, 00:10:38.842 "num_base_bdevs_discovered": 2, 00:10:38.842 "num_base_bdevs_operational": 2, 00:10:38.842 "process": { 00:10:38.842 "type": "rebuild", 00:10:38.842 "target": "spare", 00:10:38.842 "progress": { 00:10:38.842 "blocks": 10240, 00:10:38.842 "percent": 15 00:10:38.842 } 00:10:38.842 }, 
00:10:38.842 "base_bdevs_list": [ 00:10:38.842 { 00:10:38.842 "name": "spare", 00:10:38.842 "uuid": "6953e14f-a08e-5c31-931f-1977f86c52d4", 00:10:38.842 "is_configured": true, 00:10:38.842 "data_offset": 0, 00:10:38.842 "data_size": 65536 00:10:38.842 }, 00:10:38.842 { 00:10:38.842 "name": "BaseBdev2", 00:10:38.842 "uuid": "660cfb9a-f875-5e53-8eee-4cfad7873d0c", 00:10:38.842 "is_configured": true, 00:10:38.842 "data_offset": 0, 00:10:38.842 "data_size": 65536 00:10:38.842 } 00:10:38.842 ] 00:10:38.842 }' 00:10:38.842 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:38.842 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:38.842 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:38.842 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:38.842 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:38.842 09:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.842 09:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:38.842 [2024-10-30 09:45:17.375714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:39.100 [2024-10-30 09:45:17.581541] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:39.100 [2024-10-30 09:45:17.593655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.100 [2024-10-30 09:45:17.593693] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:39.100 [2024-10-30 09:45:17.593703] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:39.100 [2024-10-30 09:45:17.625364] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:10:39.100 09:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.100 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:39.100 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.100 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.100 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.100 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.100 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:39.100 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.100 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.100 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.100 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.100 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.100 09:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.100 09:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:39.100 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.100 09:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.100 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.100 "name": "raid_bdev1", 00:10:39.100 
"uuid": "5106a092-c18a-46cc-ae62-abd26157860c", 00:10:39.100 "strip_size_kb": 0, 00:10:39.100 "state": "online", 00:10:39.100 "raid_level": "raid1", 00:10:39.100 "superblock": false, 00:10:39.100 "num_base_bdevs": 2, 00:10:39.100 "num_base_bdevs_discovered": 1, 00:10:39.100 "num_base_bdevs_operational": 1, 00:10:39.100 "base_bdevs_list": [ 00:10:39.100 { 00:10:39.100 "name": null, 00:10:39.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.100 "is_configured": false, 00:10:39.100 "data_offset": 0, 00:10:39.100 "data_size": 65536 00:10:39.100 }, 00:10:39.100 { 00:10:39.100 "name": "BaseBdev2", 00:10:39.100 "uuid": "660cfb9a-f875-5e53-8eee-4cfad7873d0c", 00:10:39.100 "is_configured": true, 00:10:39.100 "data_offset": 0, 00:10:39.100 "data_size": 65536 00:10:39.100 } 00:10:39.100 ] 00:10:39.100 }' 00:10:39.100 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.100 09:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:39.358 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:39.358 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:39.358 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:39.358 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:39.358 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:39.358 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.358 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.358 09:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.358 09:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 
-- # set +x 00:10:39.358 09:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.615 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:39.615 "name": "raid_bdev1", 00:10:39.615 "uuid": "5106a092-c18a-46cc-ae62-abd26157860c", 00:10:39.615 "strip_size_kb": 0, 00:10:39.615 "state": "online", 00:10:39.615 "raid_level": "raid1", 00:10:39.615 "superblock": false, 00:10:39.615 "num_base_bdevs": 2, 00:10:39.615 "num_base_bdevs_discovered": 1, 00:10:39.615 "num_base_bdevs_operational": 1, 00:10:39.615 "base_bdevs_list": [ 00:10:39.615 { 00:10:39.615 "name": null, 00:10:39.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.615 "is_configured": false, 00:10:39.615 "data_offset": 0, 00:10:39.615 "data_size": 65536 00:10:39.615 }, 00:10:39.615 { 00:10:39.615 "name": "BaseBdev2", 00:10:39.615 "uuid": "660cfb9a-f875-5e53-8eee-4cfad7873d0c", 00:10:39.615 "is_configured": true, 00:10:39.615 "data_offset": 0, 00:10:39.615 "data_size": 65536 00:10:39.615 } 00:10:39.615 ] 00:10:39.615 }' 00:10:39.615 09:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:39.615 09:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:39.615 09:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:39.615 09:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:39.615 09:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:39.615 09:45:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.615 09:45:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:39.615 175.50 IOPS, 526.50 MiB/s [2024-10-30T09:45:18.235Z] [2024-10-30 09:45:18.041142] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:39.615 09:45:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.615 09:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:10:39.615 [2024-10-30 09:45:18.104945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:39.615 [2024-10-30 09:45:18.106554] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:39.615 [2024-10-30 09:45:18.213258] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:10:39.615 [2024-10-30 09:45:18.213646] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:10:39.873 [2024-10-30 09:45:18.421302] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:10:39.873 [2024-10-30 09:45:18.421527] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:10:40.131 [2024-10-30 09:45:18.748397] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:10:40.388 [2024-10-30 09:45:18.949624] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:10:40.388 [2024-10-30 09:45:18.949856] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:10:40.647 151.33 IOPS, 454.00 MiB/s [2024-10-30T09:45:19.267Z] 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:40.647 09:45:19 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:40.647 "name": "raid_bdev1", 00:10:40.647 "uuid": "5106a092-c18a-46cc-ae62-abd26157860c", 00:10:40.647 "strip_size_kb": 0, 00:10:40.647 "state": "online", 00:10:40.647 "raid_level": "raid1", 00:10:40.647 "superblock": false, 00:10:40.647 "num_base_bdevs": 2, 00:10:40.647 "num_base_bdevs_discovered": 2, 00:10:40.647 "num_base_bdevs_operational": 2, 00:10:40.647 "process": { 00:10:40.647 "type": "rebuild", 00:10:40.647 "target": "spare", 00:10:40.647 "progress": { 00:10:40.647 "blocks": 12288, 00:10:40.647 "percent": 18 00:10:40.647 } 00:10:40.647 }, 00:10:40.647 "base_bdevs_list": [ 00:10:40.647 { 00:10:40.647 "name": "spare", 00:10:40.647 "uuid": "6953e14f-a08e-5c31-931f-1977f86c52d4", 00:10:40.647 "is_configured": true, 00:10:40.647 "data_offset": 0, 00:10:40.647 "data_size": 65536 00:10:40.647 }, 00:10:40.647 { 00:10:40.647 "name": "BaseBdev2", 00:10:40.647 "uuid": "660cfb9a-f875-5e53-8eee-4cfad7873d0c", 00:10:40.647 "is_configured": true, 00:10:40.647 "data_offset": 0, 00:10:40.647 "data_size": 65536 00:10:40.647 } 00:10:40.647 ] 
00:10:40.647 }' 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:40.647 [2024-10-30 09:45:19.168544] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:10:40.647 [2024-10-30 09:45:19.168923] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=314 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:40.647 "name": "raid_bdev1", 00:10:40.647 "uuid": "5106a092-c18a-46cc-ae62-abd26157860c", 00:10:40.647 "strip_size_kb": 0, 00:10:40.647 "state": "online", 00:10:40.647 "raid_level": "raid1", 00:10:40.647 "superblock": false, 00:10:40.647 "num_base_bdevs": 2, 00:10:40.647 "num_base_bdevs_discovered": 2, 00:10:40.647 "num_base_bdevs_operational": 2, 00:10:40.647 "process": { 00:10:40.647 "type": "rebuild", 00:10:40.647 "target": "spare", 00:10:40.647 "progress": { 00:10:40.647 "blocks": 14336, 00:10:40.647 "percent": 21 00:10:40.647 } 00:10:40.647 }, 00:10:40.647 "base_bdevs_list": [ 00:10:40.647 { 00:10:40.647 "name": "spare", 00:10:40.647 "uuid": "6953e14f-a08e-5c31-931f-1977f86c52d4", 00:10:40.647 "is_configured": true, 00:10:40.647 "data_offset": 0, 00:10:40.647 "data_size": 65536 00:10:40.647 }, 00:10:40.647 { 00:10:40.647 "name": "BaseBdev2", 00:10:40.647 "uuid": "660cfb9a-f875-5e53-8eee-4cfad7873d0c", 00:10:40.647 "is_configured": true, 00:10:40.647 "data_offset": 0, 00:10:40.647 "data_size": 65536 00:10:40.647 } 00:10:40.647 ] 00:10:40.647 }' 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:40.647 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:40.647 09:45:19 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:40.906 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:40.906 09:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:40.906 [2024-10-30 09:45:19.394252] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:10:41.163 [2024-10-30 09:45:19.702962] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:10:41.678 126.00 IOPS, 378.00 MiB/s [2024-10-30T09:45:20.298Z] [2024-10-30 09:45:20.139968] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:10:41.678 09:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:41.678 09:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:41.678 09:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:41.678 09:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:41.678 09:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:41.678 09:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:41.678 09:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.678 09:45:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.678 09:45:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:41.678 09:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:41.936 09:45:20 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.936 09:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:41.936 "name": "raid_bdev1", 00:10:41.936 "uuid": "5106a092-c18a-46cc-ae62-abd26157860c", 00:10:41.936 "strip_size_kb": 0, 00:10:41.936 "state": "online", 00:10:41.936 "raid_level": "raid1", 00:10:41.936 "superblock": false, 00:10:41.936 "num_base_bdevs": 2, 00:10:41.936 "num_base_bdevs_discovered": 2, 00:10:41.936 "num_base_bdevs_operational": 2, 00:10:41.936 "process": { 00:10:41.936 "type": "rebuild", 00:10:41.936 "target": "spare", 00:10:41.936 "progress": { 00:10:41.936 "blocks": 28672, 00:10:41.936 "percent": 43 00:10:41.936 } 00:10:41.936 }, 00:10:41.936 "base_bdevs_list": [ 00:10:41.936 { 00:10:41.936 "name": "spare", 00:10:41.936 "uuid": "6953e14f-a08e-5c31-931f-1977f86c52d4", 00:10:41.936 "is_configured": true, 00:10:41.936 "data_offset": 0, 00:10:41.936 "data_size": 65536 00:10:41.936 }, 00:10:41.936 { 00:10:41.936 "name": "BaseBdev2", 00:10:41.936 "uuid": "660cfb9a-f875-5e53-8eee-4cfad7873d0c", 00:10:41.936 "is_configured": true, 00:10:41.936 "data_offset": 0, 00:10:41.936 "data_size": 65536 00:10:41.936 } 00:10:41.936 ] 00:10:41.936 }' 00:10:41.936 09:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:41.936 09:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:41.936 09:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:41.936 09:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:41.936 09:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:41.936 [2024-10-30 09:45:20.466015] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:10:42.194 [2024-10-30 
09:45:20.678466] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:10:42.194 [2024-10-30 09:45:20.678700] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:10:42.710 110.20 IOPS, 330.60 MiB/s [2024-10-30T09:45:21.330Z] [2024-10-30 09:45:21.139627] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:10:42.967 09:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:42.967 09:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:42.967 09:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:42.967 09:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:42.967 09:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:42.967 09:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:42.967 09:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.967 09:45:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.967 09:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.967 09:45:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:42.967 09:45:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.967 09:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:42.967 "name": "raid_bdev1", 00:10:42.967 "uuid": "5106a092-c18a-46cc-ae62-abd26157860c", 00:10:42.967 "strip_size_kb": 0, 00:10:42.967 "state": 
"online", 00:10:42.967 "raid_level": "raid1", 00:10:42.967 "superblock": false, 00:10:42.967 "num_base_bdevs": 2, 00:10:42.967 "num_base_bdevs_discovered": 2, 00:10:42.967 "num_base_bdevs_operational": 2, 00:10:42.967 "process": { 00:10:42.967 "type": "rebuild", 00:10:42.967 "target": "spare", 00:10:42.967 "progress": { 00:10:42.967 "blocks": 43008, 00:10:42.967 "percent": 65 00:10:42.967 } 00:10:42.967 }, 00:10:42.967 "base_bdevs_list": [ 00:10:42.967 { 00:10:42.967 "name": "spare", 00:10:42.967 "uuid": "6953e14f-a08e-5c31-931f-1977f86c52d4", 00:10:42.967 "is_configured": true, 00:10:42.967 "data_offset": 0, 00:10:42.967 "data_size": 65536 00:10:42.967 }, 00:10:42.967 { 00:10:42.967 "name": "BaseBdev2", 00:10:42.967 "uuid": "660cfb9a-f875-5e53-8eee-4cfad7873d0c", 00:10:42.967 "is_configured": true, 00:10:42.967 "data_offset": 0, 00:10:42.967 "data_size": 65536 00:10:42.967 } 00:10:42.967 ] 00:10:42.967 }' 00:10:42.967 09:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:42.967 09:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:42.967 09:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:42.967 09:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:42.967 09:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:43.225 [2024-10-30 09:45:21.789237] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:10:43.483 [2024-10-30 09:45:21.897144] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:10:44.048 100.00 IOPS, 300.00 MiB/s [2024-10-30T09:45:22.668Z] 09:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:44.048 09:45:22 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:44.048 09:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:44.048 09:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:44.048 09:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:44.048 09:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:44.048 09:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.048 09:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.048 09:45:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.048 09:45:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:44.048 09:45:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.048 09:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:44.048 "name": "raid_bdev1", 00:10:44.048 "uuid": "5106a092-c18a-46cc-ae62-abd26157860c", 00:10:44.048 "strip_size_kb": 0, 00:10:44.048 "state": "online", 00:10:44.048 "raid_level": "raid1", 00:10:44.048 "superblock": false, 00:10:44.048 "num_base_bdevs": 2, 00:10:44.048 "num_base_bdevs_discovered": 2, 00:10:44.048 "num_base_bdevs_operational": 2, 00:10:44.048 "process": { 00:10:44.048 "type": "rebuild", 00:10:44.048 "target": "spare", 00:10:44.048 "progress": { 00:10:44.048 "blocks": 61440, 00:10:44.048 "percent": 93 00:10:44.048 } 00:10:44.048 }, 00:10:44.048 "base_bdevs_list": [ 00:10:44.048 { 00:10:44.048 "name": "spare", 00:10:44.048 "uuid": "6953e14f-a08e-5c31-931f-1977f86c52d4", 00:10:44.048 "is_configured": true, 00:10:44.048 "data_offset": 0, 00:10:44.048 "data_size": 65536 
00:10:44.048 }, 00:10:44.048 { 00:10:44.048 "name": "BaseBdev2", 00:10:44.048 "uuid": "660cfb9a-f875-5e53-8eee-4cfad7873d0c", 00:10:44.048 "is_configured": true, 00:10:44.048 "data_offset": 0, 00:10:44.048 "data_size": 65536 00:10:44.048 } 00:10:44.048 ] 00:10:44.048 }' 00:10:44.048 09:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:44.048 09:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:44.048 09:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:44.048 09:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:44.048 09:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:44.048 [2024-10-30 09:45:22.646906] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:10:44.306 [2024-10-30 09:45:22.751992] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:10:44.306 [2024-10-30 09:45:22.753694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.128 90.14 IOPS, 270.43 MiB/s [2024-10-30T09:45:23.748Z] 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:45.128 "name": "raid_bdev1", 00:10:45.128 "uuid": "5106a092-c18a-46cc-ae62-abd26157860c", 00:10:45.128 "strip_size_kb": 0, 00:10:45.128 "state": "online", 00:10:45.128 "raid_level": "raid1", 00:10:45.128 "superblock": false, 00:10:45.128 "num_base_bdevs": 2, 00:10:45.128 "num_base_bdevs_discovered": 2, 00:10:45.128 "num_base_bdevs_operational": 2, 00:10:45.128 "base_bdevs_list": [ 00:10:45.128 { 00:10:45.128 "name": "spare", 00:10:45.128 "uuid": "6953e14f-a08e-5c31-931f-1977f86c52d4", 00:10:45.128 "is_configured": true, 00:10:45.128 "data_offset": 0, 00:10:45.128 "data_size": 65536 00:10:45.128 }, 00:10:45.128 { 00:10:45.128 "name": "BaseBdev2", 00:10:45.128 "uuid": "660cfb9a-f875-5e53-8eee-4cfad7873d0c", 00:10:45.128 "is_configured": true, 00:10:45.128 "data_offset": 0, 00:10:45.128 "data_size": 65536 00:10:45.128 } 00:10:45.128 ] 00:10:45.128 }' 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # 
break 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:45.128 "name": "raid_bdev1", 00:10:45.128 "uuid": "5106a092-c18a-46cc-ae62-abd26157860c", 00:10:45.128 "strip_size_kb": 0, 00:10:45.128 "state": "online", 00:10:45.128 "raid_level": "raid1", 00:10:45.128 "superblock": false, 00:10:45.128 "num_base_bdevs": 2, 00:10:45.128 "num_base_bdevs_discovered": 2, 00:10:45.128 "num_base_bdevs_operational": 2, 00:10:45.128 "base_bdevs_list": [ 00:10:45.128 { 00:10:45.128 "name": "spare", 00:10:45.128 "uuid": "6953e14f-a08e-5c31-931f-1977f86c52d4", 00:10:45.128 "is_configured": true, 00:10:45.128 "data_offset": 0, 00:10:45.128 "data_size": 65536 00:10:45.128 }, 00:10:45.128 { 00:10:45.128 "name": "BaseBdev2", 00:10:45.128 "uuid": "660cfb9a-f875-5e53-8eee-4cfad7873d0c", 00:10:45.128 "is_configured": true, 00:10:45.128 "data_offset": 0, 
00:10:45.128 "data_size": 65536 00:10:45.128 } 00:10:45.128 ] 00:10:45.128 }' 00:10:45.128 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:45.386 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:45.386 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:45.386 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:45.386 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:45.386 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.386 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.386 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.386 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.386 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:45.386 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.386 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.386 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.386 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.386 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.386 09:45:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.386 09:45:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:45.386 09:45:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.386 09:45:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.386 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.386 "name": "raid_bdev1", 00:10:45.386 "uuid": "5106a092-c18a-46cc-ae62-abd26157860c", 00:10:45.386 "strip_size_kb": 0, 00:10:45.386 "state": "online", 00:10:45.386 "raid_level": "raid1", 00:10:45.386 "superblock": false, 00:10:45.386 "num_base_bdevs": 2, 00:10:45.386 "num_base_bdevs_discovered": 2, 00:10:45.386 "num_base_bdevs_operational": 2, 00:10:45.386 "base_bdevs_list": [ 00:10:45.386 { 00:10:45.386 "name": "spare", 00:10:45.386 "uuid": "6953e14f-a08e-5c31-931f-1977f86c52d4", 00:10:45.386 "is_configured": true, 00:10:45.386 "data_offset": 0, 00:10:45.386 "data_size": 65536 00:10:45.386 }, 00:10:45.386 { 00:10:45.386 "name": "BaseBdev2", 00:10:45.386 "uuid": "660cfb9a-f875-5e53-8eee-4cfad7873d0c", 00:10:45.386 "is_configured": true, 00:10:45.386 "data_offset": 0, 00:10:45.386 "data_size": 65536 00:10:45.386 } 00:10:45.386 ] 00:10:45.386 }' 00:10:45.386 09:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.386 09:45:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:45.644 83.50 IOPS, 250.50 MiB/s [2024-10-30T09:45:24.264Z] 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:45.644 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.644 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:45.644 [2024-10-30 09:45:24.098810] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:45.644 [2024-10-30 09:45:24.098836] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:45.644 00:10:45.644 
Latency(us) 00:10:45.644 [2024-10-30T09:45:24.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:45.644 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:10:45.644 raid_bdev1 : 8.11 82.58 247.75 0.00 0.00 16031.35 259.94 108083.99 00:10:45.644 [2024-10-30T09:45:24.264Z] =================================================================================================================== 00:10:45.644 [2024-10-30T09:45:24.264Z] Total : 82.58 247.75 0.00 0.00 16031.35 259.94 108083.99 00:10:45.644 [2024-10-30 09:45:24.150710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.644 { 00:10:45.644 "results": [ 00:10:45.644 { 00:10:45.644 "job": "raid_bdev1", 00:10:45.644 "core_mask": "0x1", 00:10:45.644 "workload": "randrw", 00:10:45.644 "percentage": 50, 00:10:45.644 "status": "finished", 00:10:45.644 "queue_depth": 2, 00:10:45.644 "io_size": 3145728, 00:10:45.644 "runtime": 8.112994, 00:10:45.644 "iops": 82.58356902519587, 00:10:45.644 "mibps": 247.75070707558763, 00:10:45.644 "io_failed": 0, 00:10:45.644 "io_timeout": 0, 00:10:45.644 "avg_latency_us": 16031.348978185992, 00:10:45.644 "min_latency_us": 259.9384615384615, 00:10:45.644 "max_latency_us": 108083.9876923077 00:10:45.644 } 00:10:45.644 ], 00:10:45.644 "core_count": 1 00:10:45.644 } 00:10:45.644 [2024-10-30 09:45:24.150852] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.644 [2024-10-30 09:45:24.150926] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:45.644 [2024-10-30 09:45:24.150936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:45.644 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.644 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:10:45.644 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:10:45.644 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.644 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:45.644 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.644 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:10:45.644 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:10:45.644 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:10:45.644 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:10:45.644 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:45.644 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:10:45.644 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:45.644 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:45.644 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:45.644 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:10:45.644 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:45.644 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:45.644 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:10:45.902 /dev/nbd0 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:45.902 09:45:24 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:45.902 1+0 records in 00:10:45.902 1+0 records out 00:10:45.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330436 s, 12.4 MB/s 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:45.902 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:10:46.161 /dev/nbd1 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:46.161 1+0 records in 00:10:46.161 1+0 records out 00:10:46.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019789 s, 20.7 MB/s 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:46.161 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:10:46.418 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:46.418 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:46.419 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:46.419 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:46.419 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:46.419 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:46.419 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:10:46.419 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:10:46.419 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:46.419 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:46.419 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:46.419 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:46.419 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:10:46.419 09:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:46.419 09:45:24 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:46.677 09:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:46.677 09:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:46.677 09:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:46.677 09:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:46.677 09:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:46.677 09:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:46.677 09:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:10:46.677 09:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:10:46.677 09:45:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:10:46.677 09:45:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 74458 00:10:46.677 09:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 74458 ']' 00:10:46.677 09:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 74458 00:10:46.677 09:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:10:46.677 09:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:46.677 09:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74458 00:10:46.677 killing process with pid 74458 00:10:46.677 Received shutdown signal, test time was about 9.176099 seconds 00:10:46.677 00:10:46.677 Latency(us) 00:10:46.677 [2024-10-30T09:45:25.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:46.677 
[2024-10-30T09:45:25.297Z] =================================================================================================================== 00:10:46.677 [2024-10-30T09:45:25.297Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:46.677 09:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:46.677 09:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:46.677 09:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74458' 00:10:46.677 09:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 74458 00:10:46.677 [2024-10-30 09:45:25.201878] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:46.677 09:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 74458 00:10:46.934 [2024-10-30 09:45:25.315445] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:47.499 ************************************ 00:10:47.499 END TEST raid_rebuild_test_io 00:10:47.499 ************************************ 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:10:47.499 00:10:47.499 real 0m11.379s 00:10:47.499 user 0m13.964s 00:10:47.499 sys 0m0.960s 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:47.499 09:45:25 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:10:47.499 09:45:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:10:47.499 09:45:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:47.499 09:45:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:47.499 ************************************ 00:10:47.499 START TEST 
raid_rebuild_test_sb_io 00:10:47.499 ************************************ 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true true true 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:47.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=74832 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 74832 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 74832 ']' 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:47.499 09:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:47.499 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:47.499 Zero copy mechanism will not be used. 00:10:47.499 [2024-10-30 09:45:26.007474] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:10:47.499 [2024-10-30 09:45:26.007585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74832 ] 00:10:47.757 [2024-10-30 09:45:26.163030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.757 [2024-10-30 09:45:26.246259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.757 [2024-10-30 09:45:26.355778] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.757 [2024-10-30 09:45:26.355809] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.322 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:48.322 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:10:48.322 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:48.322 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:48.322 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:48.322 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:48.322 BaseBdev1_malloc 00:10:48.322 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.322 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:48.322 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.322 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:48.322 [2024-10-30 09:45:26.880261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:48.322 [2024-10-30 09:45:26.880316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.322 [2024-10-30 09:45:26.880332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:48.322 [2024-10-30 09:45:26.880342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.322 [2024-10-30 09:45:26.882099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.322 [2024-10-30 09:45:26.882131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:48.322 BaseBdev1 00:10:48.322 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.322 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:48.322 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:48.322 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.322 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:48.322 BaseBdev2_malloc 00:10:48.322 09:45:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.323 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:48.323 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.323 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:48.323 [2024-10-30 09:45:26.911406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:48.323 [2024-10-30 09:45:26.911447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.323 [2024-10-30 09:45:26.911460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:48.323 [2024-10-30 09:45:26.911469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.323 [2024-10-30 09:45:26.913135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.323 [2024-10-30 09:45:26.913263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:48.323 BaseBdev2 00:10:48.323 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.323 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:48.323 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.323 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:48.580 spare_malloc 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:48.580 09:45:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:48.580 spare_delay 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:48.580 [2024-10-30 09:45:26.965208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:48.580 [2024-10-30 09:45:26.965253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.580 [2024-10-30 09:45:26.965266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:48.580 [2024-10-30 09:45:26.965275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.580 [2024-10-30 09:45:26.966954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.580 [2024-10-30 09:45:26.967108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:48.580 spare 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:48.580 [2024-10-30 09:45:26.973257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:10:48.580 [2024-10-30 09:45:26.974719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:48.580 [2024-10-30 09:45:26.974927] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:48.580 [2024-10-30 09:45:26.974943] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:48.580 [2024-10-30 09:45:26.975150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:48.580 [2024-10-30 09:45:26.975268] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:48.580 [2024-10-30 09:45:26.975275] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:48.580 [2024-10-30 09:45:26.975383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.580 09:45:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.580 09:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.580 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.580 "name": "raid_bdev1", 00:10:48.580 "uuid": "c024149c-6e98-4e62-9630-be7598f2e0a9", 00:10:48.580 "strip_size_kb": 0, 00:10:48.580 "state": "online", 00:10:48.580 "raid_level": "raid1", 00:10:48.580 "superblock": true, 00:10:48.580 "num_base_bdevs": 2, 00:10:48.580 "num_base_bdevs_discovered": 2, 00:10:48.580 "num_base_bdevs_operational": 2, 00:10:48.580 "base_bdevs_list": [ 00:10:48.580 { 00:10:48.580 "name": "BaseBdev1", 00:10:48.580 "uuid": "251f8156-cdb6-5326-afb4-5ccbed67be7f", 00:10:48.580 "is_configured": true, 00:10:48.580 "data_offset": 2048, 00:10:48.580 "data_size": 63488 00:10:48.580 }, 00:10:48.580 { 00:10:48.580 "name": "BaseBdev2", 00:10:48.580 "uuid": "65ab83ba-4179-5a40-a10a-e293ca134236", 00:10:48.580 "is_configured": true, 00:10:48.580 "data_offset": 2048, 00:10:48.580 "data_size": 63488 00:10:48.580 } 00:10:48.580 ] 00:10:48.580 }' 00:10:48.580 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.580 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:48.839 [2024-10-30 09:45:27.293551] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 
00:10:48.839 [2024-10-30 09:45:27.353301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.839 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:48.839 "name": "raid_bdev1", 00:10:48.839 "uuid": "c024149c-6e98-4e62-9630-be7598f2e0a9", 00:10:48.839 "strip_size_kb": 0, 00:10:48.839 "state": "online", 00:10:48.839 "raid_level": "raid1", 00:10:48.839 "superblock": true, 00:10:48.840 "num_base_bdevs": 2, 00:10:48.840 "num_base_bdevs_discovered": 1, 00:10:48.840 "num_base_bdevs_operational": 1, 00:10:48.840 "base_bdevs_list": [ 00:10:48.840 { 00:10:48.840 "name": null, 00:10:48.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.840 "is_configured": false, 00:10:48.840 "data_offset": 0, 00:10:48.840 "data_size": 63488 00:10:48.840 }, 00:10:48.840 { 00:10:48.840 "name": "BaseBdev2", 00:10:48.840 "uuid": "65ab83ba-4179-5a40-a10a-e293ca134236", 00:10:48.840 "is_configured": true, 00:10:48.840 "data_offset": 2048, 00:10:48.840 "data_size": 63488 00:10:48.840 } 00:10:48.840 ] 00:10:48.840 }' 00:10:48.840 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.840 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:48.840 [2024-10-30 09:45:27.437678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:48.840 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:48.840 Zero copy mechanism will not be used. 00:10:48.840 Running I/O for 60 seconds... 
00:10:49.097 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:49.097 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.097 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:49.097 [2024-10-30 09:45:27.656906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:49.097 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.097 09:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:49.097 [2024-10-30 09:45:27.695995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:10:49.097 [2024-10-30 09:45:27.697635] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:49.355 [2024-10-30 09:45:27.804616] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:10:49.355 [2024-10-30 09:45:27.805140] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:10:49.613 [2024-10-30 09:45:28.017125] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:10:49.613 [2024-10-30 09:45:28.017499] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:10:49.871 [2024-10-30 09:45:28.339109] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:10:50.129 180.00 IOPS, 540.00 MiB/s [2024-10-30T09:45:28.750Z] 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:50.130 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:10:50.130 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:50.130 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:50.130 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:50.130 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.130 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.130 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.130 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:50.130 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.130 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:50.130 "name": "raid_bdev1", 00:10:50.130 "uuid": "c024149c-6e98-4e62-9630-be7598f2e0a9", 00:10:50.130 "strip_size_kb": 0, 00:10:50.130 "state": "online", 00:10:50.130 "raid_level": "raid1", 00:10:50.130 "superblock": true, 00:10:50.130 "num_base_bdevs": 2, 00:10:50.130 "num_base_bdevs_discovered": 2, 00:10:50.130 "num_base_bdevs_operational": 2, 00:10:50.130 "process": { 00:10:50.130 "type": "rebuild", 00:10:50.130 "target": "spare", 00:10:50.130 "progress": { 00:10:50.130 "blocks": 14336, 00:10:50.130 "percent": 22 00:10:50.130 } 00:10:50.130 }, 00:10:50.130 "base_bdevs_list": [ 00:10:50.130 { 00:10:50.130 "name": "spare", 00:10:50.130 "uuid": "aadfc2f7-7d0e-5506-95b8-54e665634f59", 00:10:50.130 "is_configured": true, 00:10:50.130 "data_offset": 2048, 00:10:50.130 "data_size": 63488 00:10:50.130 }, 00:10:50.130 { 00:10:50.130 "name": "BaseBdev2", 00:10:50.130 "uuid": "65ab83ba-4179-5a40-a10a-e293ca134236", 00:10:50.130 "is_configured": true, 
00:10:50.130 "data_offset": 2048, 00:10:50.130 "data_size": 63488 00:10:50.130 } 00:10:50.130 ] 00:10:50.130 }' 00:10:50.130 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:50.130 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:50.130 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:50.388 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:50.388 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:50.388 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.388 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:50.388 [2024-10-30 09:45:28.780511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:50.388 [2024-10-30 09:45:28.783151] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:10:50.388 [2024-10-30 09:45:28.804522] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:50.388 [2024-10-30 09:45:28.806443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.388 [2024-10-30 09:45:28.806540] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:50.388 [2024-10-30 09:45:28.806569] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:50.388 [2024-10-30 09:45:28.832760] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:10:50.388 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.388 09:45:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:50.388 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.388 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.388 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.388 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.388 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:50.388 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.388 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.389 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.389 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.389 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.389 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.389 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.389 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:50.389 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.389 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.389 "name": "raid_bdev1", 00:10:50.389 "uuid": "c024149c-6e98-4e62-9630-be7598f2e0a9", 00:10:50.389 "strip_size_kb": 0, 00:10:50.389 "state": "online", 00:10:50.389 "raid_level": "raid1", 00:10:50.389 
"superblock": true, 00:10:50.389 "num_base_bdevs": 2, 00:10:50.389 "num_base_bdevs_discovered": 1, 00:10:50.389 "num_base_bdevs_operational": 1, 00:10:50.389 "base_bdevs_list": [ 00:10:50.389 { 00:10:50.389 "name": null, 00:10:50.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.389 "is_configured": false, 00:10:50.389 "data_offset": 0, 00:10:50.389 "data_size": 63488 00:10:50.389 }, 00:10:50.389 { 00:10:50.389 "name": "BaseBdev2", 00:10:50.389 "uuid": "65ab83ba-4179-5a40-a10a-e293ca134236", 00:10:50.389 "is_configured": true, 00:10:50.389 "data_offset": 2048, 00:10:50.389 "data_size": 63488 00:10:50.389 } 00:10:50.389 ] 00:10:50.389 }' 00:10:50.389 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.389 09:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:50.646 09:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:50.646 09:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:50.646 09:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:50.646 09:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:50.646 09:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:50.646 09:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.646 09:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.646 09:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.646 09:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:50.646 09:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:50.646 09:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:50.646 "name": "raid_bdev1", 00:10:50.646 "uuid": "c024149c-6e98-4e62-9630-be7598f2e0a9", 00:10:50.646 "strip_size_kb": 0, 00:10:50.646 "state": "online", 00:10:50.646 "raid_level": "raid1", 00:10:50.646 "superblock": true, 00:10:50.646 "num_base_bdevs": 2, 00:10:50.646 "num_base_bdevs_discovered": 1, 00:10:50.646 "num_base_bdevs_operational": 1, 00:10:50.646 "base_bdevs_list": [ 00:10:50.646 { 00:10:50.646 "name": null, 00:10:50.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.646 "is_configured": false, 00:10:50.646 "data_offset": 0, 00:10:50.646 "data_size": 63488 00:10:50.646 }, 00:10:50.646 { 00:10:50.646 "name": "BaseBdev2", 00:10:50.646 "uuid": "65ab83ba-4179-5a40-a10a-e293ca134236", 00:10:50.646 "is_configured": true, 00:10:50.646 "data_offset": 2048, 00:10:50.646 "data_size": 63488 00:10:50.646 } 00:10:50.646 ] 00:10:50.646 }' 00:10:50.646 09:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:50.646 09:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:50.646 09:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:50.646 09:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:50.646 09:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:50.646 09:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.646 09:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:50.904 [2024-10-30 09:45:29.269739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:50.904 09:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:50.904 09:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:10:50.904 [2024-10-30 09:45:29.323995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:50.904 [2024-10-30 09:45:29.325626] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:50.904 [2024-10-30 09:45:29.437279] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:10:50.904 [2024-10-30 09:45:29.437635] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:10:51.169 201.00 IOPS, 603.00 MiB/s [2024-10-30T09:45:29.789Z] [2024-10-30 09:45:29.555298] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:10:51.169 [2024-10-30 09:45:29.555514] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:10:51.434 [2024-10-30 09:45:29.884623] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:10:51.434 [2024-10-30 09:45:29.884986] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:10:51.689 [2024-10-30 09:45:30.104198] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:10:51.689 [2024-10-30 09:45:30.104418] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:10:51.689 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:51.689 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:51.689 09:45:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:51.689 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:51.689 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:51.689 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.689 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.689 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.689 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:51.947 "name": "raid_bdev1", 00:10:51.947 "uuid": "c024149c-6e98-4e62-9630-be7598f2e0a9", 00:10:51.947 "strip_size_kb": 0, 00:10:51.947 "state": "online", 00:10:51.947 "raid_level": "raid1", 00:10:51.947 "superblock": true, 00:10:51.947 "num_base_bdevs": 2, 00:10:51.947 "num_base_bdevs_discovered": 2, 00:10:51.947 "num_base_bdevs_operational": 2, 00:10:51.947 "process": { 00:10:51.947 "type": "rebuild", 00:10:51.947 "target": "spare", 00:10:51.947 "progress": { 00:10:51.947 "blocks": 10240, 00:10:51.947 "percent": 16 00:10:51.947 } 00:10:51.947 }, 00:10:51.947 "base_bdevs_list": [ 00:10:51.947 { 00:10:51.947 "name": "spare", 00:10:51.947 "uuid": "aadfc2f7-7d0e-5506-95b8-54e665634f59", 00:10:51.947 "is_configured": true, 00:10:51.947 "data_offset": 2048, 00:10:51.947 "data_size": 63488 00:10:51.947 }, 00:10:51.947 { 00:10:51.947 "name": "BaseBdev2", 00:10:51.947 "uuid": "65ab83ba-4179-5a40-a10a-e293ca134236", 00:10:51.947 "is_configured": true, 00:10:51.947 "data_offset": 2048, 00:10:51.947 "data_size": 
63488 00:10:51.947 } 00:10:51.947 ] 00:10:51.947 }' 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:10:51.947 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=325 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:51.947 09:45:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:51.947 "name": "raid_bdev1", 00:10:51.947 "uuid": "c024149c-6e98-4e62-9630-be7598f2e0a9", 00:10:51.947 "strip_size_kb": 0, 00:10:51.947 "state": "online", 00:10:51.947 "raid_level": "raid1", 00:10:51.947 "superblock": true, 00:10:51.947 "num_base_bdevs": 2, 00:10:51.947 "num_base_bdevs_discovered": 2, 00:10:51.947 "num_base_bdevs_operational": 2, 00:10:51.947 "process": { 00:10:51.947 "type": "rebuild", 00:10:51.947 "target": "spare", 00:10:51.947 "progress": { 00:10:51.947 "blocks": 12288, 00:10:51.947 "percent": 19 00:10:51.947 } 00:10:51.947 }, 00:10:51.947 "base_bdevs_list": [ 00:10:51.947 { 00:10:51.947 "name": "spare", 00:10:51.947 "uuid": "aadfc2f7-7d0e-5506-95b8-54e665634f59", 00:10:51.947 "is_configured": true, 00:10:51.947 "data_offset": 2048, 00:10:51.947 "data_size": 63488 00:10:51.947 }, 00:10:51.947 { 00:10:51.947 "name": "BaseBdev2", 00:10:51.947 "uuid": "65ab83ba-4179-5a40-a10a-e293ca134236", 00:10:51.947 "is_configured": true, 00:10:51.947 "data_offset": 2048, 00:10:51.947 "data_size": 63488 00:10:51.947 } 00:10:51.947 ] 00:10:51.947 }' 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:51.947 [2024-10-30 09:45:30.449236] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 
18432 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:51.947 163.00 IOPS, 489.00 MiB/s [2024-10-30T09:45:30.567Z] 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:51.947 09:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:52.205 [2024-10-30 09:45:30.576254] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:10:52.205 [2024-10-30 09:45:30.576458] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:10:52.463 [2024-10-30 09:45:30.905199] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:10:52.463 [2024-10-30 09:45:31.014372] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:10:52.463 [2024-10-30 09:45:31.014592] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:10:53.028 [2024-10-30 09:45:31.344568] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:10:53.028 131.75 IOPS, 395.25 MiB/s [2024-10-30T09:45:31.648Z] 09:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:53.028 09:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:53.028 09:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:53.028 09:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:10:53.028 09:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:53.028 09:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:53.028 09:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.028 09:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.028 09:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.028 09:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:53.028 09:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.028 09:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:53.028 "name": "raid_bdev1", 00:10:53.028 "uuid": "c024149c-6e98-4e62-9630-be7598f2e0a9", 00:10:53.028 "strip_size_kb": 0, 00:10:53.028 "state": "online", 00:10:53.028 "raid_level": "raid1", 00:10:53.028 "superblock": true, 00:10:53.028 "num_base_bdevs": 2, 00:10:53.028 "num_base_bdevs_discovered": 2, 00:10:53.028 "num_base_bdevs_operational": 2, 00:10:53.028 "process": { 00:10:53.028 "type": "rebuild", 00:10:53.028 "target": "spare", 00:10:53.028 "progress": { 00:10:53.028 "blocks": 28672, 00:10:53.028 "percent": 45 00:10:53.028 } 00:10:53.028 }, 00:10:53.028 "base_bdevs_list": [ 00:10:53.028 { 00:10:53.028 "name": "spare", 00:10:53.028 "uuid": "aadfc2f7-7d0e-5506-95b8-54e665634f59", 00:10:53.028 "is_configured": true, 00:10:53.028 "data_offset": 2048, 00:10:53.028 "data_size": 63488 00:10:53.028 }, 00:10:53.028 { 00:10:53.028 "name": "BaseBdev2", 00:10:53.028 "uuid": "65ab83ba-4179-5a40-a10a-e293ca134236", 00:10:53.028 "is_configured": true, 00:10:53.028 "data_offset": 2048, 00:10:53.028 "data_size": 63488 00:10:53.028 } 00:10:53.028 ] 00:10:53.028 }' 00:10:53.028 09:45:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:53.028 09:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:53.028 09:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:53.028 09:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:53.028 09:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:53.285 [2024-10-30 09:45:31.678315] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:10:54.105 114.40 IOPS, 343.20 MiB/s [2024-10-30T09:45:32.725Z] 09:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:54.105 09:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:54.105 09:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:54.105 09:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:54.105 09:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:54.105 09:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:54.105 09:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.105 09:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.105 09:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.105 09:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:54.105 09:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:10:54.105 09:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:54.105 "name": "raid_bdev1", 00:10:54.105 "uuid": "c024149c-6e98-4e62-9630-be7598f2e0a9", 00:10:54.105 "strip_size_kb": 0, 00:10:54.105 "state": "online", 00:10:54.105 "raid_level": "raid1", 00:10:54.105 "superblock": true, 00:10:54.105 "num_base_bdevs": 2, 00:10:54.105 "num_base_bdevs_discovered": 2, 00:10:54.105 "num_base_bdevs_operational": 2, 00:10:54.105 "process": { 00:10:54.105 "type": "rebuild", 00:10:54.105 "target": "spare", 00:10:54.105 "progress": { 00:10:54.105 "blocks": 49152, 00:10:54.105 "percent": 77 00:10:54.105 } 00:10:54.105 }, 00:10:54.105 "base_bdevs_list": [ 00:10:54.105 { 00:10:54.105 "name": "spare", 00:10:54.105 "uuid": "aadfc2f7-7d0e-5506-95b8-54e665634f59", 00:10:54.105 "is_configured": true, 00:10:54.105 "data_offset": 2048, 00:10:54.105 "data_size": 63488 00:10:54.105 }, 00:10:54.105 { 00:10:54.105 "name": "BaseBdev2", 00:10:54.105 "uuid": "65ab83ba-4179-5a40-a10a-e293ca134236", 00:10:54.105 "is_configured": true, 00:10:54.105 "data_offset": 2048, 00:10:54.105 "data_size": 63488 00:10:54.105 } 00:10:54.105 ] 00:10:54.105 }' 00:10:54.105 09:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:54.105 09:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:54.105 09:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:54.105 09:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:54.105 09:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:54.363 [2024-10-30 09:45:32.810217] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:10:54.363 [2024-10-30 09:45:32.810410] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:10:54.620 [2024-10-30 09:45:33.236256] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:10:54.876 [2024-10-30 09:45:33.455949] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:10:55.134 101.50 IOPS, 304.50 MiB/s [2024-10-30T09:45:33.754Z] [2024-10-30 09:45:33.559884] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:10:55.134 [2024-10-30 09:45:33.561965] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.134 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:55.134 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:55.134 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:55.134 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:55.134 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:55.134 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:55.134 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.134 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.134 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.134 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:55.134 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.134 09:45:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:55.134 "name": "raid_bdev1", 00:10:55.134 "uuid": "c024149c-6e98-4e62-9630-be7598f2e0a9", 00:10:55.134 "strip_size_kb": 0, 00:10:55.134 "state": "online", 00:10:55.134 "raid_level": "raid1", 00:10:55.134 "superblock": true, 00:10:55.134 "num_base_bdevs": 2, 00:10:55.134 "num_base_bdevs_discovered": 2, 00:10:55.134 "num_base_bdevs_operational": 2, 00:10:55.134 "base_bdevs_list": [ 00:10:55.134 { 00:10:55.134 "name": "spare", 00:10:55.134 "uuid": "aadfc2f7-7d0e-5506-95b8-54e665634f59", 00:10:55.134 "is_configured": true, 00:10:55.134 "data_offset": 2048, 00:10:55.134 "data_size": 63488 00:10:55.134 }, 00:10:55.134 { 00:10:55.134 "name": "BaseBdev2", 00:10:55.134 "uuid": "65ab83ba-4179-5a40-a10a-e293ca134236", 00:10:55.134 "is_configured": true, 00:10:55.134 "data_offset": 2048, 00:10:55.134 "data_size": 63488 00:10:55.134 } 00:10:55.134 ] 00:10:55.134 }' 00:10:55.134 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:55.134 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:10:55.134 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:55.392 09:45:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:55.392 "name": "raid_bdev1", 00:10:55.392 "uuid": "c024149c-6e98-4e62-9630-be7598f2e0a9", 00:10:55.392 "strip_size_kb": 0, 00:10:55.392 "state": "online", 00:10:55.392 "raid_level": "raid1", 00:10:55.392 "superblock": true, 00:10:55.392 "num_base_bdevs": 2, 00:10:55.392 "num_base_bdevs_discovered": 2, 00:10:55.392 "num_base_bdevs_operational": 2, 00:10:55.392 "base_bdevs_list": [ 00:10:55.392 { 00:10:55.392 "name": "spare", 00:10:55.392 "uuid": "aadfc2f7-7d0e-5506-95b8-54e665634f59", 00:10:55.392 "is_configured": true, 00:10:55.392 "data_offset": 2048, 00:10:55.392 "data_size": 63488 00:10:55.392 }, 00:10:55.392 { 00:10:55.392 "name": "BaseBdev2", 00:10:55.392 "uuid": "65ab83ba-4179-5a40-a10a-e293ca134236", 00:10:55.392 "is_configured": true, 00:10:55.392 "data_offset": 2048, 00:10:55.392 "data_size": 63488 00:10:55.392 } 00:10:55.392 ] 00:10:55.392 }' 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:55.392 09:45:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.392 "name": "raid_bdev1", 00:10:55.392 "uuid": "c024149c-6e98-4e62-9630-be7598f2e0a9", 00:10:55.392 
"strip_size_kb": 0, 00:10:55.392 "state": "online", 00:10:55.392 "raid_level": "raid1", 00:10:55.392 "superblock": true, 00:10:55.392 "num_base_bdevs": 2, 00:10:55.392 "num_base_bdevs_discovered": 2, 00:10:55.392 "num_base_bdevs_operational": 2, 00:10:55.392 "base_bdevs_list": [ 00:10:55.392 { 00:10:55.392 "name": "spare", 00:10:55.392 "uuid": "aadfc2f7-7d0e-5506-95b8-54e665634f59", 00:10:55.392 "is_configured": true, 00:10:55.392 "data_offset": 2048, 00:10:55.392 "data_size": 63488 00:10:55.392 }, 00:10:55.392 { 00:10:55.392 "name": "BaseBdev2", 00:10:55.392 "uuid": "65ab83ba-4179-5a40-a10a-e293ca134236", 00:10:55.392 "is_configured": true, 00:10:55.392 "data_offset": 2048, 00:10:55.392 "data_size": 63488 00:10:55.392 } 00:10:55.392 ] 00:10:55.392 }' 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.392 09:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:55.650 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:55.650 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.650 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:55.650 [2024-10-30 09:45:34.192604] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:55.650 [2024-10-30 09:45:34.192629] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:55.650 00:10:55.650 Latency(us) 00:10:55.650 [2024-10-30T09:45:34.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.650 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:10:55.650 raid_bdev1 : 6.77 93.16 279.49 0.00 0.00 13518.17 255.21 112116.97 00:10:55.650 [2024-10-30T09:45:34.270Z] 
=================================================================================================================== 00:10:55.650 [2024-10-30T09:45:34.270Z] Total : 93.16 279.49 0.00 0.00 13518.17 255.21 112116.97 00:10:55.650 [2024-10-30 09:45:34.224574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.650 [2024-10-30 09:45:34.224680] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:55.650 [2024-10-30 09:45:34.224762] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:55.650 [2024-10-30 09:45:34.224821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:55.650 { 00:10:55.650 "results": [ 00:10:55.650 { 00:10:55.650 "job": "raid_bdev1", 00:10:55.650 "core_mask": "0x1", 00:10:55.650 "workload": "randrw", 00:10:55.650 "percentage": 50, 00:10:55.650 "status": "finished", 00:10:55.650 "queue_depth": 2, 00:10:55.650 "io_size": 3145728, 00:10:55.650 "runtime": 6.773075, 00:10:55.650 "iops": 93.1630020337882, 00:10:55.650 "mibps": 279.4890061013646, 00:10:55.650 "io_failed": 0, 00:10:55.650 "io_timeout": 0, 00:10:55.650 "avg_latency_us": 13518.170659514812, 00:10:55.650 "min_latency_us": 255.2123076923077, 00:10:55.650 "max_latency_us": 112116.97230769231 00:10:55.650 } 00:10:55.650 ], 00:10:55.650 "core_count": 1 00:10:55.650 } 00:10:55.650 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.650 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:10:55.650 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.650 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.650 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:55.650 09:45:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.650 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:10:55.650 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:10:55.650 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:10:55.650 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:10:55.650 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:55.650 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:10:55.650 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:55.650 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:55.650 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:55.650 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:10:55.650 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:55.650 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:55.650 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:10:55.908 /dev/nbd0 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 
00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:55.908 1+0 records in 00:10:55.908 1+0 records out 00:10:55.908 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282638 s, 14.5 MB/s 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' 
-z BaseBdev2 ']' 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:55.908 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:10:56.166 /dev/nbd1 00:10:56.166 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:56.166 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:56.166 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:10:56.166 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:10:56.166 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:56.166 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:56.166 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:10:56.166 09:45:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:10:56.166 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:56.166 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:56.166 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:56.166 1+0 records in 00:10:56.166 1+0 records out 00:10:56.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289276 s, 14.2 MB/s 00:10:56.166 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.423 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:10:56.423 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.423 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:56.423 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:10:56.423 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:56.423 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:56.423 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:10:56.423 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:10:56.423 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:56.423 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:10:56.423 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:10:56.423 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:10:56.423 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:56.423 09:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:10:56.681 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:56.681 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:56.681 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:56.681 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:56.681 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:56.681 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:56.681 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:10:56.681 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:10:56.681 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:56.681 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:56.681 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:56.681 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:56.681 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:10:56.681 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:56.681 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:56.939 [2024-10-30 09:45:35.352771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:56.939 [2024-10-30 09:45:35.352817] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.939 [2024-10-30 09:45:35.352833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:56.939 [2024-10-30 09:45:35.352842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.939 [2024-10-30 09:45:35.354703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.939 [2024-10-30 09:45:35.354835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:56.939 [2024-10-30 09:45:35.354918] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:10:56.939 [2024-10-30 09:45:35.354961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:56.939 [2024-10-30 09:45:35.355086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:56.939 spare 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:56.939 [2024-10-30 09:45:35.455169] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:56.939 [2024-10-30 09:45:35.455192] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:56.939 [2024-10-30 09:45:35.455437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:10:56.939 [2024-10-30 09:45:35.455575] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:56.939 [2024-10-30 09:45:35.455586] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000007b00 00:10:56.939 [2024-10-30 09:45:35.455721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.939 09:45:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.939 "name": "raid_bdev1", 00:10:56.939 "uuid": "c024149c-6e98-4e62-9630-be7598f2e0a9", 00:10:56.939 "strip_size_kb": 0, 00:10:56.939 "state": "online", 00:10:56.939 "raid_level": "raid1", 00:10:56.939 "superblock": true, 00:10:56.939 "num_base_bdevs": 2, 00:10:56.939 "num_base_bdevs_discovered": 2, 00:10:56.939 "num_base_bdevs_operational": 2, 00:10:56.939 "base_bdevs_list": [ 00:10:56.939 { 00:10:56.939 "name": "spare", 00:10:56.939 "uuid": "aadfc2f7-7d0e-5506-95b8-54e665634f59", 00:10:56.939 "is_configured": true, 00:10:56.939 "data_offset": 2048, 00:10:56.939 "data_size": 63488 00:10:56.939 }, 00:10:56.939 { 00:10:56.939 "name": "BaseBdev2", 00:10:56.939 "uuid": "65ab83ba-4179-5a40-a10a-e293ca134236", 00:10:56.939 "is_configured": true, 00:10:56.939 "data_offset": 2048, 00:10:56.939 "data_size": 63488 00:10:56.939 } 00:10:56.939 ] 00:10:56.939 }' 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.939 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:57.197 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:57.197 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:57.197 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:57.197 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:57.197 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:57.197 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.197 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.197 09:45:35 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.197 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:57.197 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.197 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:57.197 "name": "raid_bdev1", 00:10:57.197 "uuid": "c024149c-6e98-4e62-9630-be7598f2e0a9", 00:10:57.197 "strip_size_kb": 0, 00:10:57.197 "state": "online", 00:10:57.197 "raid_level": "raid1", 00:10:57.197 "superblock": true, 00:10:57.197 "num_base_bdevs": 2, 00:10:57.197 "num_base_bdevs_discovered": 2, 00:10:57.197 "num_base_bdevs_operational": 2, 00:10:57.197 "base_bdevs_list": [ 00:10:57.197 { 00:10:57.197 "name": "spare", 00:10:57.197 "uuid": "aadfc2f7-7d0e-5506-95b8-54e665634f59", 00:10:57.197 "is_configured": true, 00:10:57.197 "data_offset": 2048, 00:10:57.197 "data_size": 63488 00:10:57.197 }, 00:10:57.197 { 00:10:57.197 "name": "BaseBdev2", 00:10:57.197 "uuid": "65ab83ba-4179-5a40-a10a-e293ca134236", 00:10:57.197 "is_configured": true, 00:10:57.197 "data_offset": 2048, 00:10:57.197 "data_size": 63488 00:10:57.197 } 00:10:57.197 ] 00:10:57.197 }' 00:10:57.197 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.455 09:45:35 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:57.455 [2024-10-30 09:45:35.897013] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.455 "name": "raid_bdev1", 00:10:57.455 "uuid": "c024149c-6e98-4e62-9630-be7598f2e0a9", 00:10:57.455 "strip_size_kb": 0, 00:10:57.455 "state": "online", 00:10:57.455 "raid_level": "raid1", 00:10:57.455 "superblock": true, 00:10:57.455 "num_base_bdevs": 2, 00:10:57.455 "num_base_bdevs_discovered": 1, 00:10:57.455 "num_base_bdevs_operational": 1, 00:10:57.455 "base_bdevs_list": [ 00:10:57.455 { 00:10:57.455 "name": null, 00:10:57.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.455 "is_configured": false, 00:10:57.455 "data_offset": 0, 00:10:57.455 "data_size": 63488 00:10:57.455 }, 00:10:57.455 { 00:10:57.455 "name": "BaseBdev2", 00:10:57.455 "uuid": "65ab83ba-4179-5a40-a10a-e293ca134236", 00:10:57.455 "is_configured": true, 00:10:57.455 "data_offset": 2048, 00:10:57.455 "data_size": 63488 00:10:57.455 } 00:10:57.455 ] 00:10:57.455 }' 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.455 09:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:57.713 09:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:57.713 09:45:36 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.713 09:45:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:57.713 [2024-10-30 09:45:36.213134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:57.713 [2024-10-30 09:45:36.213360] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:10:57.713 [2024-10-30 09:45:36.213377] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:10:57.713 [2024-10-30 09:45:36.213411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:57.713 [2024-10-30 09:45:36.222704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:10:57.713 09:45:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.713 09:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:10:57.713 [2024-10-30 09:45:36.224266] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:58.642 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:58.642 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:58.642 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:58.642 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:58.642 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:58.642 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.642 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:58.642 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:58.642 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.642 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.642 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:58.642 "name": "raid_bdev1", 00:10:58.642 "uuid": "c024149c-6e98-4e62-9630-be7598f2e0a9", 00:10:58.642 "strip_size_kb": 0, 00:10:58.642 "state": "online", 00:10:58.642 "raid_level": "raid1", 00:10:58.642 "superblock": true, 00:10:58.642 "num_base_bdevs": 2, 00:10:58.642 "num_base_bdevs_discovered": 2, 00:10:58.642 "num_base_bdevs_operational": 2, 00:10:58.642 "process": { 00:10:58.642 "type": "rebuild", 00:10:58.642 "target": "spare", 00:10:58.642 "progress": { 00:10:58.642 "blocks": 20480, 00:10:58.642 "percent": 32 00:10:58.642 } 00:10:58.642 }, 00:10:58.642 "base_bdevs_list": [ 00:10:58.642 { 00:10:58.642 "name": "spare", 00:10:58.642 "uuid": "aadfc2f7-7d0e-5506-95b8-54e665634f59", 00:10:58.642 "is_configured": true, 00:10:58.642 "data_offset": 2048, 00:10:58.642 "data_size": 63488 00:10:58.642 }, 00:10:58.642 { 00:10:58.642 "name": "BaseBdev2", 00:10:58.642 "uuid": "65ab83ba-4179-5a40-a10a-e293ca134236", 00:10:58.642 "is_configured": true, 00:10:58.642 "data_offset": 2048, 00:10:58.642 "data_size": 63488 00:10:58.642 } 00:10:58.642 ] 00:10:58.642 }' 00:10:58.642 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:58.901 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:58.901 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:58.901 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:10:58.901 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:10:58.901 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.901 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:58.901 [2024-10-30 09:45:37.330551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:58.901 [2024-10-30 09:45:37.429504] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:58.901 [2024-10-30 09:45:37.429645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.901 [2024-10-30 09:45:37.429700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:58.901 [2024-10-30 09:45:37.429720] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:58.901 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.901 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:58.901 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.901 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.901 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.901 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.901 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:58.901 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.901 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:10:58.901 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.901 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.901 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.901 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.901 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.901 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:58.901 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.901 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.901 "name": "raid_bdev1", 00:10:58.902 "uuid": "c024149c-6e98-4e62-9630-be7598f2e0a9", 00:10:58.902 "strip_size_kb": 0, 00:10:58.902 "state": "online", 00:10:58.902 "raid_level": "raid1", 00:10:58.902 "superblock": true, 00:10:58.902 "num_base_bdevs": 2, 00:10:58.902 "num_base_bdevs_discovered": 1, 00:10:58.902 "num_base_bdevs_operational": 1, 00:10:58.902 "base_bdevs_list": [ 00:10:58.902 { 00:10:58.902 "name": null, 00:10:58.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.902 "is_configured": false, 00:10:58.902 "data_offset": 0, 00:10:58.902 "data_size": 63488 00:10:58.902 }, 00:10:58.902 { 00:10:58.902 "name": "BaseBdev2", 00:10:58.902 "uuid": "65ab83ba-4179-5a40-a10a-e293ca134236", 00:10:58.902 "is_configured": true, 00:10:58.902 "data_offset": 2048, 00:10:58.902 "data_size": 63488 00:10:58.902 } 00:10:58.902 ] 00:10:58.902 }' 00:10:58.902 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.902 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:59.160 09:45:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:59.160 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.160 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:59.160 [2024-10-30 09:45:37.757736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:59.160 [2024-10-30 09:45:37.757874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.160 [2024-10-30 09:45:37.757898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:59.160 [2024-10-30 09:45:37.757906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.160 [2024-10-30 09:45:37.758299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.160 [2024-10-30 09:45:37.758318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:59.160 [2024-10-30 09:45:37.758397] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:10:59.160 [2024-10-30 09:45:37.758407] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:10:59.160 [2024-10-30 09:45:37.758418] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:10:59.160 [2024-10-30 09:45:37.758435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:59.160 [2024-10-30 09:45:37.767562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:10:59.160 spare 00:10:59.160 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.160 09:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:10:59.160 [2024-10-30 09:45:37.769220] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:00.535 "name": "raid_bdev1", 00:11:00.535 "uuid": "c024149c-6e98-4e62-9630-be7598f2e0a9", 00:11:00.535 "strip_size_kb": 0, 00:11:00.535 
"state": "online", 00:11:00.535 "raid_level": "raid1", 00:11:00.535 "superblock": true, 00:11:00.535 "num_base_bdevs": 2, 00:11:00.535 "num_base_bdevs_discovered": 2, 00:11:00.535 "num_base_bdevs_operational": 2, 00:11:00.535 "process": { 00:11:00.535 "type": "rebuild", 00:11:00.535 "target": "spare", 00:11:00.535 "progress": { 00:11:00.535 "blocks": 20480, 00:11:00.535 "percent": 32 00:11:00.535 } 00:11:00.535 }, 00:11:00.535 "base_bdevs_list": [ 00:11:00.535 { 00:11:00.535 "name": "spare", 00:11:00.535 "uuid": "aadfc2f7-7d0e-5506-95b8-54e665634f59", 00:11:00.535 "is_configured": true, 00:11:00.535 "data_offset": 2048, 00:11:00.535 "data_size": 63488 00:11:00.535 }, 00:11:00.535 { 00:11:00.535 "name": "BaseBdev2", 00:11:00.535 "uuid": "65ab83ba-4179-5a40-a10a-e293ca134236", 00:11:00.535 "is_configured": true, 00:11:00.535 "data_offset": 2048, 00:11:00.535 "data_size": 63488 00:11:00.535 } 00:11:00.535 ] 00:11:00.535 }' 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:00.535 [2024-10-30 09:45:38.867502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:00.535 [2024-10-30 09:45:38.874031] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:11:00.535 [2024-10-30 09:45:38.874085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.535 [2024-10-30 09:45:38.874106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:00.535 [2024-10-30 09:45:38.874116] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.535 09:45:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.535 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.535 "name": "raid_bdev1", 00:11:00.535 "uuid": "c024149c-6e98-4e62-9630-be7598f2e0a9", 00:11:00.535 "strip_size_kb": 0, 00:11:00.535 "state": "online", 00:11:00.535 "raid_level": "raid1", 00:11:00.535 "superblock": true, 00:11:00.535 "num_base_bdevs": 2, 00:11:00.535 "num_base_bdevs_discovered": 1, 00:11:00.535 "num_base_bdevs_operational": 1, 00:11:00.535 "base_bdevs_list": [ 00:11:00.535 { 00:11:00.535 "name": null, 00:11:00.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.535 "is_configured": false, 00:11:00.535 "data_offset": 0, 00:11:00.535 "data_size": 63488 00:11:00.536 }, 00:11:00.536 { 00:11:00.536 "name": "BaseBdev2", 00:11:00.536 "uuid": "65ab83ba-4179-5a40-a10a-e293ca134236", 00:11:00.536 "is_configured": true, 00:11:00.536 "data_offset": 2048, 00:11:00.536 "data_size": 63488 00:11:00.536 } 00:11:00.536 ] 00:11:00.536 }' 00:11:00.536 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.536 09:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:00.794 09:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:00.794 09:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:00.794 09:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:00.794 09:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:00.794 09:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:00.794 09:45:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.794 09:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.794 09:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.794 09:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:00.794 09:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.794 09:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:00.794 "name": "raid_bdev1", 00:11:00.794 "uuid": "c024149c-6e98-4e62-9630-be7598f2e0a9", 00:11:00.794 "strip_size_kb": 0, 00:11:00.794 "state": "online", 00:11:00.794 "raid_level": "raid1", 00:11:00.794 "superblock": true, 00:11:00.794 "num_base_bdevs": 2, 00:11:00.794 "num_base_bdevs_discovered": 1, 00:11:00.794 "num_base_bdevs_operational": 1, 00:11:00.794 "base_bdevs_list": [ 00:11:00.794 { 00:11:00.794 "name": null, 00:11:00.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.794 "is_configured": false, 00:11:00.794 "data_offset": 0, 00:11:00.794 "data_size": 63488 00:11:00.794 }, 00:11:00.794 { 00:11:00.794 "name": "BaseBdev2", 00:11:00.794 "uuid": "65ab83ba-4179-5a40-a10a-e293ca134236", 00:11:00.794 "is_configured": true, 00:11:00.794 "data_offset": 2048, 00:11:00.794 "data_size": 63488 00:11:00.794 } 00:11:00.794 ] 00:11:00.794 }' 00:11:00.794 09:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:00.794 09:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:00.794 09:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:00.794 09:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:00.794 09:45:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:00.794 09:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.794 09:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:00.794 09:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.794 09:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:00.794 09:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.794 09:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:00.794 [2024-10-30 09:45:39.318618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:00.794 [2024-10-30 09:45:39.318664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.794 [2024-10-30 09:45:39.318680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:00.794 [2024-10-30 09:45:39.318689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.794 [2024-10-30 09:45:39.319033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.794 [2024-10-30 09:45:39.319052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:00.794 [2024-10-30 09:45:39.319122] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:00.794 [2024-10-30 09:45:39.319135] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:00.794 [2024-10-30 09:45:39.319141] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:00.794 [2024-10-30 09:45:39.319152] 
bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:00.794 BaseBdev1 00:11:00.794 09:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.794 09:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:01.725 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:01.725 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.725 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.725 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.725 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.725 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:01.725 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.725 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.725 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.725 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.725 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.725 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.725 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.725 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:01.725 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.982 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.982 "name": "raid_bdev1", 00:11:01.982 "uuid": "c024149c-6e98-4e62-9630-be7598f2e0a9", 00:11:01.982 "strip_size_kb": 0, 00:11:01.982 "state": "online", 00:11:01.982 "raid_level": "raid1", 00:11:01.982 "superblock": true, 00:11:01.983 "num_base_bdevs": 2, 00:11:01.983 "num_base_bdevs_discovered": 1, 00:11:01.983 "num_base_bdevs_operational": 1, 00:11:01.983 "base_bdevs_list": [ 00:11:01.983 { 00:11:01.983 "name": null, 00:11:01.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.983 "is_configured": false, 00:11:01.983 "data_offset": 0, 00:11:01.983 "data_size": 63488 00:11:01.983 }, 00:11:01.983 { 00:11:01.983 "name": "BaseBdev2", 00:11:01.983 "uuid": "65ab83ba-4179-5a40-a10a-e293ca134236", 00:11:01.983 "is_configured": true, 00:11:01.983 "data_offset": 2048, 00:11:01.983 "data_size": 63488 00:11:01.983 } 00:11:01.983 ] 00:11:01.983 }' 00:11:01.983 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.983 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:02.241 "name": "raid_bdev1", 00:11:02.241 "uuid": "c024149c-6e98-4e62-9630-be7598f2e0a9", 00:11:02.241 "strip_size_kb": 0, 00:11:02.241 "state": "online", 00:11:02.241 "raid_level": "raid1", 00:11:02.241 "superblock": true, 00:11:02.241 "num_base_bdevs": 2, 00:11:02.241 "num_base_bdevs_discovered": 1, 00:11:02.241 "num_base_bdevs_operational": 1, 00:11:02.241 "base_bdevs_list": [ 00:11:02.241 { 00:11:02.241 "name": null, 00:11:02.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.241 "is_configured": false, 00:11:02.241 "data_offset": 0, 00:11:02.241 "data_size": 63488 00:11:02.241 }, 00:11:02.241 { 00:11:02.241 "name": "BaseBdev2", 00:11:02.241 "uuid": "65ab83ba-4179-5a40-a10a-e293ca134236", 00:11:02.241 "is_configured": true, 00:11:02.241 "data_offset": 2048, 00:11:02.241 "data_size": 63488 00:11:02.241 } 00:11:02.241 ] 00:11:02.241 }' 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@650 -- # local es=0 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:02.241 [2024-10-30 09:45:40.763082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:02.241 [2024-10-30 09:45:40.763293] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:02.241 [2024-10-30 09:45:40.763307] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:02.241 request: 00:11:02.241 { 00:11:02.241 "base_bdev": "BaseBdev1", 00:11:02.241 "raid_bdev": "raid_bdev1", 00:11:02.241 "method": "bdev_raid_add_base_bdev", 00:11:02.241 "req_id": 1 00:11:02.241 } 00:11:02.241 Got JSON-RPC error response 00:11:02.241 response: 00:11:02.241 { 00:11:02.241 "code": -22, 00:11:02.241 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:02.241 } 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
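The `NOT rpc_cmd bdev_raid_add_base_bdev` step above is a negative test: the RPC is expected to be rejected, and the trace prints the resulting JSON-RPC error object. A hedged sketch of inspecting such an error response, using only the field names shown in the trace (`code` -22 here corresponds to `-EINVAL`):

```python
import json

# Error body copied from the JSON-RPC response printed in the trace above.
response = json.loads("""
{
  "code": -22,
  "message": "Failed to add base bdev to RAID bdev: Invalid argument"
}
""")

# -22 == -EINVAL: the base bdev's superblock did not match the raid bdev,
# so the add request is rejected rather than applied.
rejected = response["code"] == -22
print("request rejected as expected:", response["message"])
```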
00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:02.241 09:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:03.232 09:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:03.232 09:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.232 09:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.232 09:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.232 09:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.232 09:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:03.232 09:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.232 09:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.232 09:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.232 09:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.232 09:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.232 09:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.232 09:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:03.232 09:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:03.232 09:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.232 09:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.232 "name": "raid_bdev1", 00:11:03.232 "uuid": "c024149c-6e98-4e62-9630-be7598f2e0a9", 00:11:03.232 "strip_size_kb": 0, 00:11:03.232 "state": "online", 00:11:03.232 "raid_level": "raid1", 00:11:03.232 "superblock": true, 00:11:03.232 "num_base_bdevs": 2, 00:11:03.232 "num_base_bdevs_discovered": 1, 00:11:03.232 "num_base_bdevs_operational": 1, 00:11:03.232 "base_bdevs_list": [ 00:11:03.232 { 00:11:03.232 "name": null, 00:11:03.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.232 "is_configured": false, 00:11:03.232 "data_offset": 0, 00:11:03.232 "data_size": 63488 00:11:03.232 }, 00:11:03.232 { 00:11:03.232 "name": "BaseBdev2", 00:11:03.232 "uuid": "65ab83ba-4179-5a40-a10a-e293ca134236", 00:11:03.232 "is_configured": true, 00:11:03.232 "data_offset": 2048, 00:11:03.232 "data_size": 63488 00:11:03.232 } 00:11:03.232 ] 00:11:03.232 }' 00:11:03.232 09:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.232 09:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:03.491 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:03.491 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:03.491 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:03.491 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:03.491 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:03.491 09:45:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.491 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.491 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:03.491 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.491 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.748 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:03.748 "name": "raid_bdev1", 00:11:03.748 "uuid": "c024149c-6e98-4e62-9630-be7598f2e0a9", 00:11:03.748 "strip_size_kb": 0, 00:11:03.748 "state": "online", 00:11:03.748 "raid_level": "raid1", 00:11:03.748 "superblock": true, 00:11:03.748 "num_base_bdevs": 2, 00:11:03.748 "num_base_bdevs_discovered": 1, 00:11:03.748 "num_base_bdevs_operational": 1, 00:11:03.748 "base_bdevs_list": [ 00:11:03.748 { 00:11:03.748 "name": null, 00:11:03.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.748 "is_configured": false, 00:11:03.748 "data_offset": 0, 00:11:03.748 "data_size": 63488 00:11:03.748 }, 00:11:03.748 { 00:11:03.748 "name": "BaseBdev2", 00:11:03.748 "uuid": "65ab83ba-4179-5a40-a10a-e293ca134236", 00:11:03.748 "is_configured": true, 00:11:03.748 "data_offset": 2048, 00:11:03.748 "data_size": 63488 00:11:03.748 } 00:11:03.748 ] 00:11:03.748 }' 00:11:03.748 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:03.748 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:03.748 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:03.749 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:03.749 09:45:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 74832 00:11:03.749 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 74832 ']' 00:11:03.749 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 74832 00:11:03.749 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:11:03.749 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:03.749 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74832 00:11:03.749 killing process with pid 74832 00:11:03.749 Received shutdown signal, test time was about 14.773220 seconds 00:11:03.749 00:11:03.749 Latency(us) 00:11:03.749 [2024-10-30T09:45:42.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:03.749 [2024-10-30T09:45:42.369Z] =================================================================================================================== 00:11:03.749 [2024-10-30T09:45:42.369Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:03.749 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:03.749 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:03.749 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74832' 00:11:03.749 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 74832 00:11:03.749 [2024-10-30 09:45:42.212715] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:03.749 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 74832 00:11:03.749 [2024-10-30 09:45:42.212811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.749 [2024-10-30 09:45:42.212856] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:03.749 [2024-10-30 09:45:42.212863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:11:03.749 [2024-10-30 09:45:42.324326] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:04.314 ************************************ 00:11:04.314 END TEST raid_rebuild_test_sb_io 00:11:04.314 ************************************ 00:11:04.314 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:04.314 00:11:04.314 real 0m16.968s 00:11:04.314 user 0m21.690s 00:11:04.314 sys 0m1.383s 00:11:04.314 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:04.314 09:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:04.572 09:45:42 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:04.572 09:45:42 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:11:04.572 09:45:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:11:04.572 09:45:42 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:04.572 09:45:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:04.572 ************************************ 00:11:04.572 START TEST raid_rebuild_test 00:11:04.572 ************************************ 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false false true 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:04.572 09:45:42 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:04.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75493 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75493 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 75493 ']' 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.572 09:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:04.572 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:04.572 Zero copy mechanism will not be used. 
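The bdevperf notice above ("I/O size of 3145728 is greater than zero copy threshold (65536)") is plain arithmetic on the launch flags: `-o 3M` requests 3 MiB I/Os, which exceeds the 64 KiB zero-copy threshold, so zero copy is disabled for the run. A one-line check of the numbers in the log:

```python
io_size = 3 * 1024 * 1024             # bdevperf -o 3M
zero_copy_threshold = 65536           # 64 KiB, quoted in the notice

print(io_size)                        # 3145728, matching the logged value
print(io_size > zero_copy_threshold)  # True -> "Zero copy mechanism will not be used"
```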
00:11:04.572 [2024-10-30 09:45:43.019831] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:11:04.572 [2024-10-30 09:45:43.019950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75493 ] 00:11:04.572 [2024-10-30 09:45:43.178780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.830 [2024-10-30 09:45:43.279021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.830 [2024-10-30 09:45:43.415270] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.830 [2024-10-30 09:45:43.415298] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.395 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:05.395 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:11:05.395 09:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:05.395 09:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:05.395 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.395 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.395 BaseBdev1_malloc 00:11:05.395 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.395 09:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:05.395 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.395 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.395 
[2024-10-30 09:45:43.859312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:05.395 [2024-10-30 09:45:43.859376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.395 [2024-10-30 09:45:43.859397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:05.395 [2024-10-30 09:45:43.859409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.395 [2024-10-30 09:45:43.861524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.396 [2024-10-30 09:45:43.861687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:05.396 BaseBdev1 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.396 BaseBdev2_malloc 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.396 [2024-10-30 09:45:43.895079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:05.396 [2024-10-30 09:45:43.895127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:05.396 [2024-10-30 09:45:43.895143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:05.396 [2024-10-30 09:45:43.895154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.396 [2024-10-30 09:45:43.897222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.396 [2024-10-30 09:45:43.897257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:05.396 BaseBdev2 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.396 BaseBdev3_malloc 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.396 [2024-10-30 09:45:43.943952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:11:05.396 [2024-10-30 09:45:43.944005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.396 [2024-10-30 09:45:43.944025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:05.396 [2024-10-30 09:45:43.944036] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.396 [2024-10-30 09:45:43.946116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.396 [2024-10-30 09:45:43.946270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:05.396 BaseBdev3 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.396 BaseBdev4_malloc 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.396 [2024-10-30 09:45:43.980071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:11:05.396 [2024-10-30 09:45:43.980119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.396 [2024-10-30 09:45:43.980137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:05.396 [2024-10-30 09:45:43.980146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.396 [2024-10-30 09:45:43.982240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.396 [2024-10-30 09:45:43.982276] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:05.396 BaseBdev4 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.396 09:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.396 spare_malloc 00:11:05.396 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.396 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:05.396 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.396 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.654 spare_delay 00:11:05.654 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.654 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:05.654 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.654 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.654 [2024-10-30 09:45:44.024107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:05.654 [2024-10-30 09:45:44.024156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.654 [2024-10-30 09:45:44.024172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:05.654 [2024-10-30 09:45:44.024181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.654 [2024-10-30 
09:45:44.026297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.654 [2024-10-30 09:45:44.026332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:05.654 spare 00:11:05.654 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.654 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:11:05.654 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.654 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.654 [2024-10-30 09:45:44.032159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:05.654 [2024-10-30 09:45:44.033958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:05.654 [2024-10-30 09:45:44.034135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:05.654 [2024-10-30 09:45:44.034193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:05.654 [2024-10-30 09:45:44.034275] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:05.654 [2024-10-30 09:45:44.034287] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:05.654 [2024-10-30 09:45:44.034551] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:05.654 [2024-10-30 09:45:44.034699] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:05.654 [2024-10-30 09:45:44.034710] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:05.654 [2024-10-30 09:45:44.034850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:11:05.654 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.654 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:05.654 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.654 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.654 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.654 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.654 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.654 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.654 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.655 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.655 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.655 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.655 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.655 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.655 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.655 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.655 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.655 "name": "raid_bdev1", 00:11:05.655 "uuid": "960cb5d7-7a3b-4a31-9448-a196b07e779f", 00:11:05.655 "strip_size_kb": 0, 00:11:05.655 "state": "online", 00:11:05.655 "raid_level": 
"raid1", 00:11:05.655 "superblock": false, 00:11:05.655 "num_base_bdevs": 4, 00:11:05.655 "num_base_bdevs_discovered": 4, 00:11:05.655 "num_base_bdevs_operational": 4, 00:11:05.655 "base_bdevs_list": [ 00:11:05.655 { 00:11:05.655 "name": "BaseBdev1", 00:11:05.655 "uuid": "35c74508-254a-54f2-8626-8c562c570528", 00:11:05.655 "is_configured": true, 00:11:05.655 "data_offset": 0, 00:11:05.655 "data_size": 65536 00:11:05.655 }, 00:11:05.655 { 00:11:05.655 "name": "BaseBdev2", 00:11:05.655 "uuid": "aee954bf-9e2a-520a-a3e7-7248e075e677", 00:11:05.655 "is_configured": true, 00:11:05.655 "data_offset": 0, 00:11:05.655 "data_size": 65536 00:11:05.655 }, 00:11:05.655 { 00:11:05.655 "name": "BaseBdev3", 00:11:05.655 "uuid": "3cf0eed0-30d9-547a-9e76-d8851ed2a37b", 00:11:05.655 "is_configured": true, 00:11:05.655 "data_offset": 0, 00:11:05.655 "data_size": 65536 00:11:05.655 }, 00:11:05.655 { 00:11:05.655 "name": "BaseBdev4", 00:11:05.655 "uuid": "94cb370a-12d3-5350-b474-5e76f3c85958", 00:11:05.655 "is_configured": true, 00:11:05.655 "data_offset": 0, 00:11:05.655 "data_size": 65536 00:11:05.655 } 00:11:05.655 ] 00:11:05.655 }' 00:11:05.655 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.655 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.912 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:05.912 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:05.912 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.912 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.912 [2024-10-30 09:45:44.352566] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.912 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.912 09:45:44 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:05.912 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.912 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.912 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.912 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:05.912 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.913 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:05.913 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:05.913 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:05.913 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:05.913 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:05.913 09:45:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:05.913 09:45:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:05.913 09:45:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:05.913 09:45:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:05.913 09:45:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:05.913 09:45:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:05.913 09:45:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:05.913 09:45:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:05.913 09:45:44 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:06.170 [2024-10-30 09:45:44.596305] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:06.170 /dev/nbd0 00:11:06.170 09:45:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:06.170 09:45:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:06.170 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:11:06.170 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:11:06.170 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:06.170 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:06.170 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:11:06.170 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:11:06.170 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:06.170 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:06.170 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:06.170 1+0 records in 00:11:06.170 1+0 records out 00:11:06.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385996 s, 10.6 MB/s 00:11:06.170 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:06.170 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:11:06.170 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:11:06.170 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:06.170 09:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:11:06.170 09:45:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:06.170 09:45:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:06.170 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:06.170 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:06.170 09:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:11:12.814 65536+0 records in 00:11:12.814 65536+0 records out 00:11:12.814 33554432 bytes (34 MB, 32 MiB) copied, 5.93883 s, 5.7 MB/s 00:11:12.814 09:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:12.814 09:45:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:12.814 09:45:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:12.814 09:45:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:12.814 09:45:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:12.814 09:45:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:12.814 09:45:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:12.814 [2024-10-30 09:45:50.788702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.814 09:45:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:12.814 09:45:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:12.815 
09:45:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.815 [2024-10-30 09:45:50.812758] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.815 09:45:50 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.815 "name": "raid_bdev1", 00:11:12.815 "uuid": "960cb5d7-7a3b-4a31-9448-a196b07e779f", 00:11:12.815 "strip_size_kb": 0, 00:11:12.815 "state": "online", 00:11:12.815 "raid_level": "raid1", 00:11:12.815 "superblock": false, 00:11:12.815 "num_base_bdevs": 4, 00:11:12.815 "num_base_bdevs_discovered": 3, 00:11:12.815 "num_base_bdevs_operational": 3, 00:11:12.815 "base_bdevs_list": [ 00:11:12.815 { 00:11:12.815 "name": null, 00:11:12.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.815 "is_configured": false, 00:11:12.815 "data_offset": 0, 00:11:12.815 "data_size": 65536 00:11:12.815 }, 00:11:12.815 { 00:11:12.815 "name": "BaseBdev2", 00:11:12.815 "uuid": "aee954bf-9e2a-520a-a3e7-7248e075e677", 00:11:12.815 "is_configured": true, 00:11:12.815 "data_offset": 0, 00:11:12.815 "data_size": 65536 00:11:12.815 }, 00:11:12.815 { 00:11:12.815 "name": "BaseBdev3", 00:11:12.815 "uuid": "3cf0eed0-30d9-547a-9e76-d8851ed2a37b", 00:11:12.815 "is_configured": true, 00:11:12.815 "data_offset": 0, 00:11:12.815 "data_size": 65536 00:11:12.815 }, 00:11:12.815 { 00:11:12.815 "name": "BaseBdev4", 00:11:12.815 "uuid": "94cb370a-12d3-5350-b474-5e76f3c85958", 00:11:12.815 
"is_configured": true, 00:11:12.815 "data_offset": 0, 00:11:12.815 "data_size": 65536 00:11:12.815 } 00:11:12.815 ] 00:11:12.815 }' 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.815 09:45:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.815 09:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:12.815 09:45:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.815 09:45:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.815 [2024-10-30 09:45:51.140812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:12.815 [2024-10-30 09:45:51.148939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:11:12.815 09:45:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.815 09:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:12.815 [2024-10-30 09:45:51.150573] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:13.744 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:13.744 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:13.744 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:13.744 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:13.744 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:13.744 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.744 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:13.745 "name": "raid_bdev1", 00:11:13.745 "uuid": "960cb5d7-7a3b-4a31-9448-a196b07e779f", 00:11:13.745 "strip_size_kb": 0, 00:11:13.745 "state": "online", 00:11:13.745 "raid_level": "raid1", 00:11:13.745 "superblock": false, 00:11:13.745 "num_base_bdevs": 4, 00:11:13.745 "num_base_bdevs_discovered": 4, 00:11:13.745 "num_base_bdevs_operational": 4, 00:11:13.745 "process": { 00:11:13.745 "type": "rebuild", 00:11:13.745 "target": "spare", 00:11:13.745 "progress": { 00:11:13.745 "blocks": 20480, 00:11:13.745 "percent": 31 00:11:13.745 } 00:11:13.745 }, 00:11:13.745 "base_bdevs_list": [ 00:11:13.745 { 00:11:13.745 "name": "spare", 00:11:13.745 "uuid": "65d5c909-26b8-5ec6-80de-98fee40b6585", 00:11:13.745 "is_configured": true, 00:11:13.745 "data_offset": 0, 00:11:13.745 "data_size": 65536 00:11:13.745 }, 00:11:13.745 { 00:11:13.745 "name": "BaseBdev2", 00:11:13.745 "uuid": "aee954bf-9e2a-520a-a3e7-7248e075e677", 00:11:13.745 "is_configured": true, 00:11:13.745 "data_offset": 0, 00:11:13.745 "data_size": 65536 00:11:13.745 }, 00:11:13.745 { 00:11:13.745 "name": "BaseBdev3", 00:11:13.745 "uuid": "3cf0eed0-30d9-547a-9e76-d8851ed2a37b", 00:11:13.745 "is_configured": true, 00:11:13.745 "data_offset": 0, 00:11:13.745 "data_size": 65536 00:11:13.745 }, 00:11:13.745 { 00:11:13.745 "name": "BaseBdev4", 00:11:13.745 "uuid": "94cb370a-12d3-5350-b474-5e76f3c85958", 00:11:13.745 "is_configured": true, 00:11:13.745 "data_offset": 0, 00:11:13.745 "data_size": 65536 00:11:13.745 } 00:11:13.745 ] 00:11:13.745 }' 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.745 [2024-10-30 09:45:52.252782] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:13.745 [2024-10-30 09:45:52.255228] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:13.745 [2024-10-30 09:45:52.255278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.745 [2024-10-30 09:45:52.255291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:13.745 [2024-10-30 09:45:52.255298] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.745 "name": "raid_bdev1", 00:11:13.745 "uuid": "960cb5d7-7a3b-4a31-9448-a196b07e779f", 00:11:13.745 "strip_size_kb": 0, 00:11:13.745 "state": "online", 00:11:13.745 "raid_level": "raid1", 00:11:13.745 "superblock": false, 00:11:13.745 "num_base_bdevs": 4, 00:11:13.745 "num_base_bdevs_discovered": 3, 00:11:13.745 "num_base_bdevs_operational": 3, 00:11:13.745 "base_bdevs_list": [ 00:11:13.745 { 00:11:13.745 "name": null, 00:11:13.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.745 "is_configured": false, 00:11:13.745 "data_offset": 0, 00:11:13.745 "data_size": 65536 00:11:13.745 }, 00:11:13.745 { 00:11:13.745 "name": "BaseBdev2", 00:11:13.745 "uuid": "aee954bf-9e2a-520a-a3e7-7248e075e677", 00:11:13.745 "is_configured": true, 00:11:13.745 "data_offset": 0, 00:11:13.745 "data_size": 65536 00:11:13.745 }, 00:11:13.745 { 
00:11:13.745 "name": "BaseBdev3", 00:11:13.745 "uuid": "3cf0eed0-30d9-547a-9e76-d8851ed2a37b", 00:11:13.745 "is_configured": true, 00:11:13.745 "data_offset": 0, 00:11:13.745 "data_size": 65536 00:11:13.745 }, 00:11:13.745 { 00:11:13.745 "name": "BaseBdev4", 00:11:13.745 "uuid": "94cb370a-12d3-5350-b474-5e76f3c85958", 00:11:13.745 "is_configured": true, 00:11:13.745 "data_offset": 0, 00:11:13.745 "data_size": 65536 00:11:13.745 } 00:11:13.745 ] 00:11:13.745 }' 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.745 09:45:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.003 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:14.003 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:14.003 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:14.003 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:14.003 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:14.003 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.003 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.003 09:45:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.003 09:45:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.003 09:45:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.003 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:14.003 "name": "raid_bdev1", 00:11:14.003 "uuid": "960cb5d7-7a3b-4a31-9448-a196b07e779f", 00:11:14.003 "strip_size_kb": 0, 00:11:14.003 "state": "online", 
00:11:14.003 "raid_level": "raid1", 00:11:14.003 "superblock": false, 00:11:14.003 "num_base_bdevs": 4, 00:11:14.003 "num_base_bdevs_discovered": 3, 00:11:14.003 "num_base_bdevs_operational": 3, 00:11:14.003 "base_bdevs_list": [ 00:11:14.003 { 00:11:14.003 "name": null, 00:11:14.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.003 "is_configured": false, 00:11:14.003 "data_offset": 0, 00:11:14.003 "data_size": 65536 00:11:14.003 }, 00:11:14.003 { 00:11:14.003 "name": "BaseBdev2", 00:11:14.003 "uuid": "aee954bf-9e2a-520a-a3e7-7248e075e677", 00:11:14.003 "is_configured": true, 00:11:14.003 "data_offset": 0, 00:11:14.003 "data_size": 65536 00:11:14.003 }, 00:11:14.003 { 00:11:14.003 "name": "BaseBdev3", 00:11:14.003 "uuid": "3cf0eed0-30d9-547a-9e76-d8851ed2a37b", 00:11:14.003 "is_configured": true, 00:11:14.003 "data_offset": 0, 00:11:14.003 "data_size": 65536 00:11:14.003 }, 00:11:14.003 { 00:11:14.003 "name": "BaseBdev4", 00:11:14.003 "uuid": "94cb370a-12d3-5350-b474-5e76f3c85958", 00:11:14.003 "is_configured": true, 00:11:14.003 "data_offset": 0, 00:11:14.003 "data_size": 65536 00:11:14.003 } 00:11:14.003 ] 00:11:14.003 }' 00:11:14.003 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:14.260 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:14.260 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:14.260 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:14.260 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:14.260 09:45:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.260 09:45:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.260 [2024-10-30 09:45:52.667190] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:14.260 [2024-10-30 09:45:52.674710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:11:14.261 09:45:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.261 09:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:14.261 [2024-10-30 09:45:52.676325] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:15.195 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:15.195 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:15.195 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:15.195 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:15.195 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:15.195 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.195 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.195 09:45:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.195 09:45:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.195 09:45:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.195 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:15.195 "name": "raid_bdev1", 00:11:15.195 "uuid": "960cb5d7-7a3b-4a31-9448-a196b07e779f", 00:11:15.195 "strip_size_kb": 0, 00:11:15.195 "state": "online", 00:11:15.195 "raid_level": "raid1", 00:11:15.195 "superblock": false, 00:11:15.195 "num_base_bdevs": 4, 00:11:15.195 
"num_base_bdevs_discovered": 4, 00:11:15.195 "num_base_bdevs_operational": 4, 00:11:15.195 "process": { 00:11:15.195 "type": "rebuild", 00:11:15.195 "target": "spare", 00:11:15.195 "progress": { 00:11:15.195 "blocks": 20480, 00:11:15.195 "percent": 31 00:11:15.195 } 00:11:15.195 }, 00:11:15.195 "base_bdevs_list": [ 00:11:15.195 { 00:11:15.195 "name": "spare", 00:11:15.195 "uuid": "65d5c909-26b8-5ec6-80de-98fee40b6585", 00:11:15.195 "is_configured": true, 00:11:15.195 "data_offset": 0, 00:11:15.195 "data_size": 65536 00:11:15.195 }, 00:11:15.195 { 00:11:15.195 "name": "BaseBdev2", 00:11:15.195 "uuid": "aee954bf-9e2a-520a-a3e7-7248e075e677", 00:11:15.195 "is_configured": true, 00:11:15.195 "data_offset": 0, 00:11:15.195 "data_size": 65536 00:11:15.195 }, 00:11:15.195 { 00:11:15.195 "name": "BaseBdev3", 00:11:15.195 "uuid": "3cf0eed0-30d9-547a-9e76-d8851ed2a37b", 00:11:15.195 "is_configured": true, 00:11:15.195 "data_offset": 0, 00:11:15.195 "data_size": 65536 00:11:15.195 }, 00:11:15.195 { 00:11:15.195 "name": "BaseBdev4", 00:11:15.195 "uuid": "94cb370a-12d3-5350-b474-5e76f3c85958", 00:11:15.195 "is_configured": true, 00:11:15.195 "data_offset": 0, 00:11:15.195 "data_size": 65536 00:11:15.195 } 00:11:15.195 ] 00:11:15.195 }' 00:11:15.195 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:15.195 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:15.195 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:15.195 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:15.195 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:15.195 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:11:15.195 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:11:15.195 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:11:15.195 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:15.195 09:45:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.195 09:45:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.195 [2024-10-30 09:45:53.790517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:15.454 [2024-10-30 09:45:53.881551] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.454 09:45:53 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:15.454 "name": "raid_bdev1", 00:11:15.454 "uuid": "960cb5d7-7a3b-4a31-9448-a196b07e779f", 00:11:15.454 "strip_size_kb": 0, 00:11:15.454 "state": "online", 00:11:15.454 "raid_level": "raid1", 00:11:15.454 "superblock": false, 00:11:15.454 "num_base_bdevs": 4, 00:11:15.454 "num_base_bdevs_discovered": 3, 00:11:15.454 "num_base_bdevs_operational": 3, 00:11:15.454 "process": { 00:11:15.454 "type": "rebuild", 00:11:15.454 "target": "spare", 00:11:15.454 "progress": { 00:11:15.454 "blocks": 24576, 00:11:15.454 "percent": 37 00:11:15.454 } 00:11:15.454 }, 00:11:15.454 "base_bdevs_list": [ 00:11:15.454 { 00:11:15.454 "name": "spare", 00:11:15.454 "uuid": "65d5c909-26b8-5ec6-80de-98fee40b6585", 00:11:15.454 "is_configured": true, 00:11:15.454 "data_offset": 0, 00:11:15.454 "data_size": 65536 00:11:15.454 }, 00:11:15.454 { 00:11:15.454 "name": null, 00:11:15.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.454 "is_configured": false, 00:11:15.454 "data_offset": 0, 00:11:15.454 "data_size": 65536 00:11:15.454 }, 00:11:15.454 { 00:11:15.454 "name": "BaseBdev3", 00:11:15.454 "uuid": "3cf0eed0-30d9-547a-9e76-d8851ed2a37b", 00:11:15.454 "is_configured": true, 00:11:15.454 "data_offset": 0, 00:11:15.454 "data_size": 65536 00:11:15.454 }, 00:11:15.454 { 00:11:15.454 "name": "BaseBdev4", 00:11:15.454 "uuid": "94cb370a-12d3-5350-b474-5e76f3c85958", 00:11:15.454 "is_configured": true, 00:11:15.454 "data_offset": 0, 00:11:15.454 "data_size": 65536 00:11:15.454 } 00:11:15.454 ] 00:11:15.454 }' 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=348 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.454 09:45:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.454 09:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:15.454 "name": "raid_bdev1", 00:11:15.454 "uuid": "960cb5d7-7a3b-4a31-9448-a196b07e779f", 00:11:15.454 "strip_size_kb": 0, 00:11:15.454 "state": "online", 00:11:15.454 "raid_level": "raid1", 00:11:15.454 "superblock": false, 00:11:15.454 "num_base_bdevs": 4, 00:11:15.454 "num_base_bdevs_discovered": 3, 00:11:15.454 "num_base_bdevs_operational": 3, 00:11:15.454 "process": { 00:11:15.454 "type": "rebuild", 00:11:15.454 "target": "spare", 00:11:15.454 "progress": { 
00:11:15.454 "blocks": 24576, 00:11:15.454 "percent": 37 00:11:15.454 } 00:11:15.454 }, 00:11:15.454 "base_bdevs_list": [ 00:11:15.454 { 00:11:15.454 "name": "spare", 00:11:15.454 "uuid": "65d5c909-26b8-5ec6-80de-98fee40b6585", 00:11:15.454 "is_configured": true, 00:11:15.454 "data_offset": 0, 00:11:15.454 "data_size": 65536 00:11:15.454 }, 00:11:15.454 { 00:11:15.454 "name": null, 00:11:15.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.454 "is_configured": false, 00:11:15.454 "data_offset": 0, 00:11:15.454 "data_size": 65536 00:11:15.454 }, 00:11:15.454 { 00:11:15.454 "name": "BaseBdev3", 00:11:15.454 "uuid": "3cf0eed0-30d9-547a-9e76-d8851ed2a37b", 00:11:15.454 "is_configured": true, 00:11:15.454 "data_offset": 0, 00:11:15.454 "data_size": 65536 00:11:15.454 }, 00:11:15.454 { 00:11:15.454 "name": "BaseBdev4", 00:11:15.454 "uuid": "94cb370a-12d3-5350-b474-5e76f3c85958", 00:11:15.454 "is_configured": true, 00:11:15.454 "data_offset": 0, 00:11:15.454 "data_size": 65536 00:11:15.454 } 00:11:15.454 ] 00:11:15.454 }' 00:11:15.454 09:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:15.454 09:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:15.454 09:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:15.454 09:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:15.454 09:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:16.830 09:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:16.830 09:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:16.830 09:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:16.830 09:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:11:16.830 09:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:16.830 09:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:16.830 09:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.830 09:45:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.830 09:45:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.830 09:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.830 09:45:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.830 09:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:16.830 "name": "raid_bdev1", 00:11:16.830 "uuid": "960cb5d7-7a3b-4a31-9448-a196b07e779f", 00:11:16.830 "strip_size_kb": 0, 00:11:16.830 "state": "online", 00:11:16.830 "raid_level": "raid1", 00:11:16.830 "superblock": false, 00:11:16.830 "num_base_bdevs": 4, 00:11:16.830 "num_base_bdevs_discovered": 3, 00:11:16.830 "num_base_bdevs_operational": 3, 00:11:16.830 "process": { 00:11:16.830 "type": "rebuild", 00:11:16.830 "target": "spare", 00:11:16.830 "progress": { 00:11:16.830 "blocks": 47104, 00:11:16.830 "percent": 71 00:11:16.830 } 00:11:16.830 }, 00:11:16.830 "base_bdevs_list": [ 00:11:16.830 { 00:11:16.830 "name": "spare", 00:11:16.830 "uuid": "65d5c909-26b8-5ec6-80de-98fee40b6585", 00:11:16.830 "is_configured": true, 00:11:16.830 "data_offset": 0, 00:11:16.830 "data_size": 65536 00:11:16.830 }, 00:11:16.830 { 00:11:16.830 "name": null, 00:11:16.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.830 "is_configured": false, 00:11:16.830 "data_offset": 0, 00:11:16.830 "data_size": 65536 00:11:16.830 }, 00:11:16.830 { 00:11:16.830 "name": "BaseBdev3", 00:11:16.830 "uuid": 
"3cf0eed0-30d9-547a-9e76-d8851ed2a37b", 00:11:16.830 "is_configured": true, 00:11:16.830 "data_offset": 0, 00:11:16.830 "data_size": 65536 00:11:16.830 }, 00:11:16.830 { 00:11:16.830 "name": "BaseBdev4", 00:11:16.830 "uuid": "94cb370a-12d3-5350-b474-5e76f3c85958", 00:11:16.830 "is_configured": true, 00:11:16.830 "data_offset": 0, 00:11:16.830 "data_size": 65536 00:11:16.830 } 00:11:16.830 ] 00:11:16.830 }' 00:11:16.830 09:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:16.830 09:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:16.830 09:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:16.830 09:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:16.830 09:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:17.396 [2024-10-30 09:45:55.890446] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:17.396 [2024-10-30 09:45:55.890522] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:17.396 [2024-10-30 09:45:55.890566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.720 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:17.720 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:17.720 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:17.720 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:17.721 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:17.721 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:17.721 09:45:56 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.721 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.721 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.721 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.721 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.721 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:17.721 "name": "raid_bdev1", 00:11:17.721 "uuid": "960cb5d7-7a3b-4a31-9448-a196b07e779f", 00:11:17.721 "strip_size_kb": 0, 00:11:17.721 "state": "online", 00:11:17.721 "raid_level": "raid1", 00:11:17.721 "superblock": false, 00:11:17.721 "num_base_bdevs": 4, 00:11:17.721 "num_base_bdevs_discovered": 3, 00:11:17.721 "num_base_bdevs_operational": 3, 00:11:17.721 "base_bdevs_list": [ 00:11:17.721 { 00:11:17.721 "name": "spare", 00:11:17.721 "uuid": "65d5c909-26b8-5ec6-80de-98fee40b6585", 00:11:17.721 "is_configured": true, 00:11:17.721 "data_offset": 0, 00:11:17.721 "data_size": 65536 00:11:17.721 }, 00:11:17.721 { 00:11:17.721 "name": null, 00:11:17.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.721 "is_configured": false, 00:11:17.721 "data_offset": 0, 00:11:17.721 "data_size": 65536 00:11:17.721 }, 00:11:17.721 { 00:11:17.721 "name": "BaseBdev3", 00:11:17.721 "uuid": "3cf0eed0-30d9-547a-9e76-d8851ed2a37b", 00:11:17.721 "is_configured": true, 00:11:17.721 "data_offset": 0, 00:11:17.721 "data_size": 65536 00:11:17.721 }, 00:11:17.721 { 00:11:17.721 "name": "BaseBdev4", 00:11:17.721 "uuid": "94cb370a-12d3-5350-b474-5e76f3c85958", 00:11:17.721 "is_configured": true, 00:11:17.721 "data_offset": 0, 00:11:17.721 "data_size": 65536 00:11:17.721 } 00:11:17.721 ] 00:11:17.721 }' 00:11:17.721 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:11:17.721 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:17.721 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:17.721 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:17.721 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:11:17.721 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:17.721 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:17.721 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:17.721 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:17.721 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:17.721 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.721 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.721 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.721 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.721 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.721 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:17.721 "name": "raid_bdev1", 00:11:17.721 "uuid": "960cb5d7-7a3b-4a31-9448-a196b07e779f", 00:11:17.721 "strip_size_kb": 0, 00:11:17.721 "state": "online", 00:11:17.721 "raid_level": "raid1", 00:11:17.721 "superblock": false, 00:11:17.721 "num_base_bdevs": 4, 00:11:17.721 "num_base_bdevs_discovered": 3, 00:11:17.721 "num_base_bdevs_operational": 3, 00:11:17.721 
"base_bdevs_list": [ 00:11:17.721 { 00:11:17.721 "name": "spare", 00:11:17.721 "uuid": "65d5c909-26b8-5ec6-80de-98fee40b6585", 00:11:17.721 "is_configured": true, 00:11:17.721 "data_offset": 0, 00:11:17.721 "data_size": 65536 00:11:17.721 }, 00:11:17.721 { 00:11:17.721 "name": null, 00:11:17.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.721 "is_configured": false, 00:11:17.721 "data_offset": 0, 00:11:17.721 "data_size": 65536 00:11:17.721 }, 00:11:17.721 { 00:11:17.721 "name": "BaseBdev3", 00:11:17.721 "uuid": "3cf0eed0-30d9-547a-9e76-d8851ed2a37b", 00:11:17.721 "is_configured": true, 00:11:17.721 "data_offset": 0, 00:11:17.721 "data_size": 65536 00:11:17.721 }, 00:11:17.721 { 00:11:17.721 "name": "BaseBdev4", 00:11:17.721 "uuid": "94cb370a-12d3-5350-b474-5e76f3c85958", 00:11:17.721 "is_configured": true, 00:11:17.721 "data_offset": 0, 00:11:17.721 "data_size": 65536 00:11:17.721 } 00:11:17.721 ] 00:11:17.721 }' 00:11:17.721 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:17.996 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:17.996 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:17.996 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:17.996 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:17.996 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.996 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.996 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.996 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.996 09:45:56 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.996 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.996 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.996 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.996 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.996 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.996 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.996 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.996 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.996 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.996 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.996 "name": "raid_bdev1", 00:11:17.996 "uuid": "960cb5d7-7a3b-4a31-9448-a196b07e779f", 00:11:17.996 "strip_size_kb": 0, 00:11:17.996 "state": "online", 00:11:17.996 "raid_level": "raid1", 00:11:17.996 "superblock": false, 00:11:17.996 "num_base_bdevs": 4, 00:11:17.996 "num_base_bdevs_discovered": 3, 00:11:17.996 "num_base_bdevs_operational": 3, 00:11:17.996 "base_bdevs_list": [ 00:11:17.996 { 00:11:17.996 "name": "spare", 00:11:17.996 "uuid": "65d5c909-26b8-5ec6-80de-98fee40b6585", 00:11:17.996 "is_configured": true, 00:11:17.996 "data_offset": 0, 00:11:17.996 "data_size": 65536 00:11:17.996 }, 00:11:17.996 { 00:11:17.996 "name": null, 00:11:17.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.996 "is_configured": false, 00:11:17.996 "data_offset": 0, 00:11:17.996 "data_size": 65536 00:11:17.996 }, 00:11:17.996 { 00:11:17.996 "name": "BaseBdev3", 00:11:17.996 "uuid": 
"3cf0eed0-30d9-547a-9e76-d8851ed2a37b", 00:11:17.996 "is_configured": true, 00:11:17.996 "data_offset": 0, 00:11:17.996 "data_size": 65536 00:11:17.996 }, 00:11:17.996 { 00:11:17.996 "name": "BaseBdev4", 00:11:17.996 "uuid": "94cb370a-12d3-5350-b474-5e76f3c85958", 00:11:17.996 "is_configured": true, 00:11:17.996 "data_offset": 0, 00:11:17.996 "data_size": 65536 00:11:17.996 } 00:11:17.996 ] 00:11:17.996 }' 00:11:17.996 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.996 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.255 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:18.255 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.255 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.255 [2024-10-30 09:45:56.678659] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:18.255 [2024-10-30 09:45:56.678688] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:18.255 [2024-10-30 09:45:56.678751] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:18.255 [2024-10-30 09:45:56.678821] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:18.255 [2024-10-30 09:45:56.678836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:18.255 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.255 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.255 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.255 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 
00:11:18.255 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.255 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.255 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:18.255 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:18.255 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:18.255 09:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:18.255 09:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:18.255 09:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:18.255 09:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:18.255 09:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:18.255 09:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:18.255 09:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:18.255 09:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:18.255 09:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:18.255 09:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:18.513 /dev/nbd0 00:11:18.513 09:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:18.513 09:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:18.513 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:11:18.513 09:45:56 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:11:18.513 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:18.513 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:18.513 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:11:18.513 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:11:18.513 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:18.513 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:18.513 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.513 1+0 records in 00:11:18.513 1+0 records out 00:11:18.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289927 s, 14.1 MB/s 00:11:18.513 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.513 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:11:18.513 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.513 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:18.513 09:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:11:18.513 09:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:18.513 09:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:18.513 09:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:18.770 /dev/nbd1 00:11:18.770 
09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:18.770 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:18.770 09:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:11:18.770 09:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:11:18.770 09:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:18.770 09:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:18.770 09:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:11:18.770 09:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:11:18.770 09:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:18.770 09:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:18.770 09:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.770 1+0 records in 00:11:18.770 1+0 records out 00:11:18.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516519 s, 7.9 MB/s 00:11:18.770 09:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.770 09:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:11:18.770 09:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.770 09:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:18.770 09:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:11:18.770 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:11:18.770 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:18.770 09:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:18.770 09:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:18.770 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:18.770 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:18.770 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:18.770 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:18.770 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:18.770 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:19.027 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:19.027 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:19.027 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:19.027 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:19.027 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:19.027 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:19.027 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:19.027 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:19.027 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:19.027 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:19.284 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:19.285 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:19.285 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:19.285 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:19.285 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:19.285 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:19.285 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:19.285 09:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:19.285 09:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:19.285 09:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75493 00:11:19.285 09:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 75493 ']' 00:11:19.285 09:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 75493 00:11:19.285 09:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:11:19.285 09:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:19.285 09:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75493 00:11:19.285 09:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:19.285 09:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:19.285 09:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75493' 00:11:19.285 killing process with pid 75493 00:11:19.285 
09:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 75493 00:11:19.285 Received shutdown signal, test time was about 60.000000 seconds 00:11:19.285 00:11:19.285 Latency(us) 00:11:19.285 [2024-10-30T09:45:57.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:19.285 [2024-10-30T09:45:57.905Z] =================================================================================================================== 00:11:19.285 [2024-10-30T09:45:57.905Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:19.285 [2024-10-30 09:45:57.721165] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:19.285 09:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 75493 00:11:19.542 [2024-10-30 09:45:57.961828] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:20.107 09:45:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:20.107 00:11:20.107 real 0m15.576s 00:11:20.107 user 0m16.915s 00:11:20.107 sys 0m2.459s 00:11:20.107 09:45:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:20.107 09:45:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.107 ************************************ 00:11:20.107 END TEST raid_rebuild_test 00:11:20.107 ************************************ 00:11:20.107 09:45:58 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:11:20.107 09:45:58 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:11:20.107 09:45:58 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:20.107 09:45:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:20.107 ************************************ 00:11:20.107 START TEST raid_rebuild_test_sb 00:11:20.107 ************************************ 00:11:20.107 09:45:58 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true false true 00:11:20.107 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:20.107 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:11:20.107 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:20.107 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:20.107 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:20.107 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:20.107 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:20.107 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:20.107 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:20.107 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:20.107 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:20.107 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:20.107 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:20.107 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:11:20.107 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:20.107 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:20.107 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:11:20.108 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:20.108 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:11:20.108 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:20.108 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:20.108 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:20.108 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:20.108 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:20.108 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:20.108 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:20.108 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:20.108 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:20.108 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:20.108 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:20.108 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75925 00:11:20.108 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75925 00:11:20.108 09:45:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 75925 ']' 00:11:20.108 09:45:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.108 09:45:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:20.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:20.108 09:45:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.108 09:45:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:20.108 09:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:20.108 09:45:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.108 [2024-10-30 09:45:58.645255] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:11:20.108 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:20.108 Zero copy mechanism will not be used. 00:11:20.108 [2024-10-30 09:45:58.645372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75925 ] 00:11:20.366 [2024-10-30 09:45:58.796120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.366 [2024-10-30 09:45:58.878680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.682 [2024-10-30 09:45:58.988247] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.682 [2024-10-30 09:45:58.988284] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.940 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:20.941 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:11:20.941 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:20.941 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:20.941 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.941 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.941 BaseBdev1_malloc 00:11:20.941 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.941 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:20.941 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.941 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.941 [2024-10-30 09:45:59.529955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:20.941 [2024-10-30 09:45:59.530015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.941 [2024-10-30 09:45:59.530038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:20.941 [2024-10-30 09:45:59.530052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.941 [2024-10-30 09:45:59.532398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.941 [2024-10-30 09:45:59.532441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:20.941 BaseBdev1 00:11:20.941 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.941 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:20.941 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:20.941 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.941 09:45:59 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.941 BaseBdev2_malloc 00:11:20.941 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.941 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:20.941 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.941 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.199 [2024-10-30 09:45:59.561657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:21.199 [2024-10-30 09:45:59.561707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.199 [2024-10-30 09:45:59.561722] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:21.199 [2024-10-30 09:45:59.561733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.199 [2024-10-30 09:45:59.563479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.199 [2024-10-30 09:45:59.563509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:21.199 BaseBdev2 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.199 BaseBdev3_malloc 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.199 [2024-10-30 09:45:59.611855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:11:21.199 [2024-10-30 09:45:59.611906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.199 [2024-10-30 09:45:59.611924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:21.199 [2024-10-30 09:45:59.611933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.199 [2024-10-30 09:45:59.613634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.199 [2024-10-30 09:45:59.613664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:21.199 BaseBdev3 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.199 BaseBdev4_malloc 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:11:21.199 09:45:59 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.199 [2024-10-30 09:45:59.643371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:11:21.199 [2024-10-30 09:45:59.643413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.199 [2024-10-30 09:45:59.643426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:21.199 [2024-10-30 09:45:59.643435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.199 [2024-10-30 09:45:59.645136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.199 [2024-10-30 09:45:59.645165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:21.199 BaseBdev4 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.199 spare_malloc 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.199 spare_delay 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.199 [2024-10-30 09:45:59.682455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:21.199 [2024-10-30 09:45:59.682496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.199 [2024-10-30 09:45:59.682510] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:21.199 [2024-10-30 09:45:59.682518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.199 [2024-10-30 09:45:59.684216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.199 [2024-10-30 09:45:59.684242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:21.199 spare 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.199 [2024-10-30 09:45:59.690497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:21.199 [2024-10-30 09:45:59.691953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:21.199 [2024-10-30 09:45:59.692011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:21.199 [2024-10-30 
09:45:59.692052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:21.199 [2024-10-30 09:45:59.692209] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:21.199 [2024-10-30 09:45:59.692227] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:21.199 [2024-10-30 09:45:59.692424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:21.199 [2024-10-30 09:45:59.692558] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:21.199 [2024-10-30 09:45:59.692570] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:21.199 [2024-10-30 09:45:59.692684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:21.199 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.200 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.200 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.200 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.200 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.200 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.200 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.200 "name": "raid_bdev1", 00:11:21.200 "uuid": "658b1f89-0c56-4049-b047-38c095037d5a", 00:11:21.200 "strip_size_kb": 0, 00:11:21.200 "state": "online", 00:11:21.200 "raid_level": "raid1", 00:11:21.200 "superblock": true, 00:11:21.200 "num_base_bdevs": 4, 00:11:21.200 "num_base_bdevs_discovered": 4, 00:11:21.200 "num_base_bdevs_operational": 4, 00:11:21.200 "base_bdevs_list": [ 00:11:21.200 { 00:11:21.200 "name": "BaseBdev1", 00:11:21.200 "uuid": "1b06b568-0b8e-5267-9b91-10b6d04ce044", 00:11:21.200 "is_configured": true, 00:11:21.200 "data_offset": 2048, 00:11:21.200 "data_size": 63488 00:11:21.200 }, 00:11:21.200 { 00:11:21.200 "name": "BaseBdev2", 00:11:21.200 "uuid": "e979b2fd-e33f-5503-916e-456f1e9b73f7", 00:11:21.200 "is_configured": true, 00:11:21.200 "data_offset": 2048, 00:11:21.200 "data_size": 63488 00:11:21.200 }, 00:11:21.200 { 00:11:21.200 "name": "BaseBdev3", 00:11:21.200 "uuid": "922b4b74-96cd-5ca4-bcf7-01fd5c3e100d", 00:11:21.200 "is_configured": true, 00:11:21.200 "data_offset": 2048, 00:11:21.200 "data_size": 63488 00:11:21.200 }, 00:11:21.200 { 00:11:21.200 "name": "BaseBdev4", 00:11:21.200 "uuid": "bd4b019b-f8f6-5b88-a78c-2ce78f89000c", 00:11:21.200 "is_configured": true, 00:11:21.200 "data_offset": 2048, 
00:11:21.200 "data_size": 63488 00:11:21.200 } 00:11:21.200 ] 00:11:21.200 }' 00:11:21.200 09:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.200 09:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.457 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:21.457 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:21.457 09:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.457 09:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.457 [2024-10-30 09:46:00.026842] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.457 09:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.457 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:21.457 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.457 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:21.457 09:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.457 09:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.457 09:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 
00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:21.714 [2024-10-30 09:46:00.262639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:21.714 /dev/nbd0 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q 
-w nbd0 /proc/partitions 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:21.714 1+0 records in 00:11:21.714 1+0 records out 00:11:21.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302089 s, 13.6 MB/s 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:21.714 09:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:11:26.969 63488+0 records in 00:11:26.969 63488+0 records out 00:11:26.969 32505856 bytes (33 MB, 31 MiB) copied, 5.11265 s, 6.4 MB/s 00:11:26.969 09:46:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:26.969 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:26.969 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:26.969 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:26.969 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:26.969 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:26.969 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:27.227 [2024-10-30 09:46:05.625365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:11:27.227 [2024-10-30 09:46:05.649439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.227 "name": 
"raid_bdev1", 00:11:27.227 "uuid": "658b1f89-0c56-4049-b047-38c095037d5a", 00:11:27.227 "strip_size_kb": 0, 00:11:27.227 "state": "online", 00:11:27.227 "raid_level": "raid1", 00:11:27.227 "superblock": true, 00:11:27.227 "num_base_bdevs": 4, 00:11:27.227 "num_base_bdevs_discovered": 3, 00:11:27.227 "num_base_bdevs_operational": 3, 00:11:27.227 "base_bdevs_list": [ 00:11:27.227 { 00:11:27.227 "name": null, 00:11:27.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.227 "is_configured": false, 00:11:27.227 "data_offset": 0, 00:11:27.227 "data_size": 63488 00:11:27.227 }, 00:11:27.227 { 00:11:27.227 "name": "BaseBdev2", 00:11:27.227 "uuid": "e979b2fd-e33f-5503-916e-456f1e9b73f7", 00:11:27.227 "is_configured": true, 00:11:27.227 "data_offset": 2048, 00:11:27.227 "data_size": 63488 00:11:27.227 }, 00:11:27.227 { 00:11:27.227 "name": "BaseBdev3", 00:11:27.227 "uuid": "922b4b74-96cd-5ca4-bcf7-01fd5c3e100d", 00:11:27.227 "is_configured": true, 00:11:27.227 "data_offset": 2048, 00:11:27.227 "data_size": 63488 00:11:27.227 }, 00:11:27.227 { 00:11:27.227 "name": "BaseBdev4", 00:11:27.227 "uuid": "bd4b019b-f8f6-5b88-a78c-2ce78f89000c", 00:11:27.227 "is_configured": true, 00:11:27.227 "data_offset": 2048, 00:11:27.227 "data_size": 63488 00:11:27.227 } 00:11:27.227 ] 00:11:27.227 }' 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.227 09:46:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.484 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:27.484 09:46:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.484 09:46:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.484 [2024-10-30 09:46:05.981498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:27.484 [2024-10-30 09:46:05.989724] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:11:27.484 09:46:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.484 09:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:27.484 [2024-10-30 09:46:05.991308] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:28.417 09:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:28.417 09:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:28.417 09:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:28.417 09:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:28.417 09:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:28.417 09:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.417 09:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.417 09:46:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.417 09:46:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.417 09:46:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.417 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:28.417 "name": "raid_bdev1", 00:11:28.417 "uuid": "658b1f89-0c56-4049-b047-38c095037d5a", 00:11:28.417 "strip_size_kb": 0, 00:11:28.417 "state": "online", 00:11:28.417 "raid_level": "raid1", 00:11:28.417 "superblock": true, 00:11:28.417 "num_base_bdevs": 4, 00:11:28.417 "num_base_bdevs_discovered": 4, 00:11:28.417 "num_base_bdevs_operational": 4, 00:11:28.417 
"process": { 00:11:28.417 "type": "rebuild", 00:11:28.417 "target": "spare", 00:11:28.417 "progress": { 00:11:28.417 "blocks": 20480, 00:11:28.417 "percent": 32 00:11:28.417 } 00:11:28.417 }, 00:11:28.417 "base_bdevs_list": [ 00:11:28.417 { 00:11:28.417 "name": "spare", 00:11:28.417 "uuid": "37d3d746-a3d4-5bee-a4be-818ae3c74565", 00:11:28.417 "is_configured": true, 00:11:28.417 "data_offset": 2048, 00:11:28.417 "data_size": 63488 00:11:28.417 }, 00:11:28.417 { 00:11:28.417 "name": "BaseBdev2", 00:11:28.417 "uuid": "e979b2fd-e33f-5503-916e-456f1e9b73f7", 00:11:28.417 "is_configured": true, 00:11:28.417 "data_offset": 2048, 00:11:28.417 "data_size": 63488 00:11:28.417 }, 00:11:28.417 { 00:11:28.417 "name": "BaseBdev3", 00:11:28.417 "uuid": "922b4b74-96cd-5ca4-bcf7-01fd5c3e100d", 00:11:28.417 "is_configured": true, 00:11:28.417 "data_offset": 2048, 00:11:28.417 "data_size": 63488 00:11:28.417 }, 00:11:28.417 { 00:11:28.417 "name": "BaseBdev4", 00:11:28.417 "uuid": "bd4b019b-f8f6-5b88-a78c-2ce78f89000c", 00:11:28.418 "is_configured": true, 00:11:28.418 "data_offset": 2048, 00:11:28.418 "data_size": 63488 00:11:28.418 } 00:11:28.418 ] 00:11:28.418 }' 00:11:28.418 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:28.675 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:28.675 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:28.675 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:28.675 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:28.675 09:46:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.675 09:46:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.675 [2024-10-30 09:46:07.093590] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:28.675 [2024-10-30 09:46:07.096372] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:28.675 [2024-10-30 09:46:07.096420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.675 [2024-10-30 09:46:07.096434] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:28.675 [2024-10-30 09:46:07.096442] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:28.675 09:46:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.675 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:28.675 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.675 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.675 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.675 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.675 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.675 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.675 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.675 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.675 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.675 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.675 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 
-- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.675 09:46:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.675 09:46:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.675 09:46:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.675 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.675 "name": "raid_bdev1", 00:11:28.675 "uuid": "658b1f89-0c56-4049-b047-38c095037d5a", 00:11:28.675 "strip_size_kb": 0, 00:11:28.675 "state": "online", 00:11:28.675 "raid_level": "raid1", 00:11:28.675 "superblock": true, 00:11:28.675 "num_base_bdevs": 4, 00:11:28.675 "num_base_bdevs_discovered": 3, 00:11:28.675 "num_base_bdevs_operational": 3, 00:11:28.675 "base_bdevs_list": [ 00:11:28.675 { 00:11:28.675 "name": null, 00:11:28.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.675 "is_configured": false, 00:11:28.675 "data_offset": 0, 00:11:28.675 "data_size": 63488 00:11:28.675 }, 00:11:28.675 { 00:11:28.675 "name": "BaseBdev2", 00:11:28.675 "uuid": "e979b2fd-e33f-5503-916e-456f1e9b73f7", 00:11:28.675 "is_configured": true, 00:11:28.675 "data_offset": 2048, 00:11:28.675 "data_size": 63488 00:11:28.675 }, 00:11:28.675 { 00:11:28.675 "name": "BaseBdev3", 00:11:28.675 "uuid": "922b4b74-96cd-5ca4-bcf7-01fd5c3e100d", 00:11:28.675 "is_configured": true, 00:11:28.675 "data_offset": 2048, 00:11:28.675 "data_size": 63488 00:11:28.675 }, 00:11:28.675 { 00:11:28.675 "name": "BaseBdev4", 00:11:28.675 "uuid": "bd4b019b-f8f6-5b88-a78c-2ce78f89000c", 00:11:28.675 "is_configured": true, 00:11:28.675 "data_offset": 2048, 00:11:28.675 "data_size": 63488 00:11:28.675 } 00:11:28.675 ] 00:11:28.675 }' 00:11:28.675 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.675 09:46:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.933 09:46:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:28.933 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:28.933 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:28.933 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:28.933 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:28.933 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.933 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.933 09:46:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.933 09:46:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.933 09:46:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.933 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:28.933 "name": "raid_bdev1", 00:11:28.933 "uuid": "658b1f89-0c56-4049-b047-38c095037d5a", 00:11:28.933 "strip_size_kb": 0, 00:11:28.933 "state": "online", 00:11:28.933 "raid_level": "raid1", 00:11:28.933 "superblock": true, 00:11:28.933 "num_base_bdevs": 4, 00:11:28.933 "num_base_bdevs_discovered": 3, 00:11:28.933 "num_base_bdevs_operational": 3, 00:11:28.933 "base_bdevs_list": [ 00:11:28.933 { 00:11:28.933 "name": null, 00:11:28.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.933 "is_configured": false, 00:11:28.933 "data_offset": 0, 00:11:28.933 "data_size": 63488 00:11:28.933 }, 00:11:28.933 { 00:11:28.933 "name": "BaseBdev2", 00:11:28.933 "uuid": "e979b2fd-e33f-5503-916e-456f1e9b73f7", 00:11:28.933 "is_configured": true, 00:11:28.933 "data_offset": 2048, 00:11:28.933 "data_size": 
63488 00:11:28.933 }, 00:11:28.933 { 00:11:28.933 "name": "BaseBdev3", 00:11:28.933 "uuid": "922b4b74-96cd-5ca4-bcf7-01fd5c3e100d", 00:11:28.933 "is_configured": true, 00:11:28.933 "data_offset": 2048, 00:11:28.933 "data_size": 63488 00:11:28.933 }, 00:11:28.933 { 00:11:28.933 "name": "BaseBdev4", 00:11:28.933 "uuid": "bd4b019b-f8f6-5b88-a78c-2ce78f89000c", 00:11:28.933 "is_configured": true, 00:11:28.933 "data_offset": 2048, 00:11:28.933 "data_size": 63488 00:11:28.933 } 00:11:28.933 ] 00:11:28.933 }' 00:11:28.933 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:28.933 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:28.933 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:28.933 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:28.933 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:28.933 09:46:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.933 09:46:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.933 [2024-10-30 09:46:07.516487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:28.933 [2024-10-30 09:46:07.524237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:11:28.933 09:46:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.933 09:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:28.933 [2024-10-30 09:46:07.525830] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:30.305 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:11:30.305 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:30.305 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:30.305 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:30.305 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:30.305 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.305 09:46:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.305 09:46:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.305 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.305 09:46:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.305 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:30.305 "name": "raid_bdev1", 00:11:30.305 "uuid": "658b1f89-0c56-4049-b047-38c095037d5a", 00:11:30.305 "strip_size_kb": 0, 00:11:30.305 "state": "online", 00:11:30.305 "raid_level": "raid1", 00:11:30.305 "superblock": true, 00:11:30.305 "num_base_bdevs": 4, 00:11:30.305 "num_base_bdevs_discovered": 4, 00:11:30.305 "num_base_bdevs_operational": 4, 00:11:30.305 "process": { 00:11:30.305 "type": "rebuild", 00:11:30.305 "target": "spare", 00:11:30.305 "progress": { 00:11:30.305 "blocks": 20480, 00:11:30.305 "percent": 32 00:11:30.305 } 00:11:30.305 }, 00:11:30.305 "base_bdevs_list": [ 00:11:30.305 { 00:11:30.305 "name": "spare", 00:11:30.305 "uuid": "37d3d746-a3d4-5bee-a4be-818ae3c74565", 00:11:30.305 "is_configured": true, 00:11:30.305 "data_offset": 2048, 00:11:30.305 "data_size": 63488 00:11:30.305 }, 00:11:30.305 { 00:11:30.305 "name": "BaseBdev2", 00:11:30.305 "uuid": 
"e979b2fd-e33f-5503-916e-456f1e9b73f7", 00:11:30.305 "is_configured": true, 00:11:30.305 "data_offset": 2048, 00:11:30.305 "data_size": 63488 00:11:30.305 }, 00:11:30.305 { 00:11:30.305 "name": "BaseBdev3", 00:11:30.305 "uuid": "922b4b74-96cd-5ca4-bcf7-01fd5c3e100d", 00:11:30.305 "is_configured": true, 00:11:30.305 "data_offset": 2048, 00:11:30.305 "data_size": 63488 00:11:30.305 }, 00:11:30.305 { 00:11:30.305 "name": "BaseBdev4", 00:11:30.305 "uuid": "bd4b019b-f8f6-5b88-a78c-2ce78f89000c", 00:11:30.305 "is_configured": true, 00:11:30.305 "data_offset": 2048, 00:11:30.305 "data_size": 63488 00:11:30.305 } 00:11:30.306 ] 00:11:30.306 }' 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:30.306 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.306 09:46:08 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.306 [2024-10-30 09:46:08.627987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:30.306 [2024-10-30 09:46:08.730775] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:30.306 "name": "raid_bdev1", 00:11:30.306 "uuid": "658b1f89-0c56-4049-b047-38c095037d5a", 00:11:30.306 "strip_size_kb": 0, 00:11:30.306 
"state": "online", 00:11:30.306 "raid_level": "raid1", 00:11:30.306 "superblock": true, 00:11:30.306 "num_base_bdevs": 4, 00:11:30.306 "num_base_bdevs_discovered": 3, 00:11:30.306 "num_base_bdevs_operational": 3, 00:11:30.306 "process": { 00:11:30.306 "type": "rebuild", 00:11:30.306 "target": "spare", 00:11:30.306 "progress": { 00:11:30.306 "blocks": 22528, 00:11:30.306 "percent": 35 00:11:30.306 } 00:11:30.306 }, 00:11:30.306 "base_bdevs_list": [ 00:11:30.306 { 00:11:30.306 "name": "spare", 00:11:30.306 "uuid": "37d3d746-a3d4-5bee-a4be-818ae3c74565", 00:11:30.306 "is_configured": true, 00:11:30.306 "data_offset": 2048, 00:11:30.306 "data_size": 63488 00:11:30.306 }, 00:11:30.306 { 00:11:30.306 "name": null, 00:11:30.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.306 "is_configured": false, 00:11:30.306 "data_offset": 0, 00:11:30.306 "data_size": 63488 00:11:30.306 }, 00:11:30.306 { 00:11:30.306 "name": "BaseBdev3", 00:11:30.306 "uuid": "922b4b74-96cd-5ca4-bcf7-01fd5c3e100d", 00:11:30.306 "is_configured": true, 00:11:30.306 "data_offset": 2048, 00:11:30.306 "data_size": 63488 00:11:30.306 }, 00:11:30.306 { 00:11:30.306 "name": "BaseBdev4", 00:11:30.306 "uuid": "bd4b019b-f8f6-5b88-a78c-2ce78f89000c", 00:11:30.306 "is_configured": true, 00:11:30.306 "data_offset": 2048, 00:11:30.306 "data_size": 63488 00:11:30.306 } 00:11:30.306 ] 00:11:30.306 }' 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=363 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:30.306 "name": "raid_bdev1", 00:11:30.306 "uuid": "658b1f89-0c56-4049-b047-38c095037d5a", 00:11:30.306 "strip_size_kb": 0, 00:11:30.306 "state": "online", 00:11:30.306 "raid_level": "raid1", 00:11:30.306 "superblock": true, 00:11:30.306 "num_base_bdevs": 4, 00:11:30.306 "num_base_bdevs_discovered": 3, 00:11:30.306 "num_base_bdevs_operational": 3, 00:11:30.306 "process": { 00:11:30.306 "type": "rebuild", 00:11:30.306 "target": "spare", 00:11:30.306 "progress": { 00:11:30.306 "blocks": 24576, 00:11:30.306 "percent": 38 00:11:30.306 } 00:11:30.306 }, 00:11:30.306 "base_bdevs_list": [ 00:11:30.306 { 00:11:30.306 "name": "spare", 00:11:30.306 "uuid": "37d3d746-a3d4-5bee-a4be-818ae3c74565", 00:11:30.306 "is_configured": 
true, 00:11:30.306 "data_offset": 2048, 00:11:30.306 "data_size": 63488 00:11:30.306 }, 00:11:30.306 { 00:11:30.306 "name": null, 00:11:30.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.306 "is_configured": false, 00:11:30.306 "data_offset": 0, 00:11:30.306 "data_size": 63488 00:11:30.306 }, 00:11:30.306 { 00:11:30.306 "name": "BaseBdev3", 00:11:30.306 "uuid": "922b4b74-96cd-5ca4-bcf7-01fd5c3e100d", 00:11:30.306 "is_configured": true, 00:11:30.306 "data_offset": 2048, 00:11:30.306 "data_size": 63488 00:11:30.306 }, 00:11:30.306 { 00:11:30.306 "name": "BaseBdev4", 00:11:30.306 "uuid": "bd4b019b-f8f6-5b88-a78c-2ce78f89000c", 00:11:30.306 "is_configured": true, 00:11:30.306 "data_offset": 2048, 00:11:30.306 "data_size": 63488 00:11:30.306 } 00:11:30.306 ] 00:11:30.306 }' 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:30.306 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:30.564 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:30.564 09:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:31.496 09:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:31.496 09:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:31.496 09:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:31.496 09:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:31.496 09:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:31.496 09:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:11:31.496 09:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.496 09:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.496 09:46:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.496 09:46:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.496 09:46:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.496 09:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:31.496 "name": "raid_bdev1", 00:11:31.496 "uuid": "658b1f89-0c56-4049-b047-38c095037d5a", 00:11:31.496 "strip_size_kb": 0, 00:11:31.496 "state": "online", 00:11:31.496 "raid_level": "raid1", 00:11:31.496 "superblock": true, 00:11:31.496 "num_base_bdevs": 4, 00:11:31.496 "num_base_bdevs_discovered": 3, 00:11:31.496 "num_base_bdevs_operational": 3, 00:11:31.496 "process": { 00:11:31.496 "type": "rebuild", 00:11:31.496 "target": "spare", 00:11:31.496 "progress": { 00:11:31.496 "blocks": 47104, 00:11:31.496 "percent": 74 00:11:31.496 } 00:11:31.496 }, 00:11:31.496 "base_bdevs_list": [ 00:11:31.496 { 00:11:31.496 "name": "spare", 00:11:31.496 "uuid": "37d3d746-a3d4-5bee-a4be-818ae3c74565", 00:11:31.496 "is_configured": true, 00:11:31.496 "data_offset": 2048, 00:11:31.496 "data_size": 63488 00:11:31.496 }, 00:11:31.496 { 00:11:31.496 "name": null, 00:11:31.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.496 "is_configured": false, 00:11:31.496 "data_offset": 0, 00:11:31.496 "data_size": 63488 00:11:31.496 }, 00:11:31.496 { 00:11:31.496 "name": "BaseBdev3", 00:11:31.496 "uuid": "922b4b74-96cd-5ca4-bcf7-01fd5c3e100d", 00:11:31.496 "is_configured": true, 00:11:31.496 "data_offset": 2048, 00:11:31.496 "data_size": 63488 00:11:31.496 }, 00:11:31.496 { 00:11:31.496 "name": "BaseBdev4", 00:11:31.496 "uuid": 
"bd4b019b-f8f6-5b88-a78c-2ce78f89000c", 00:11:31.496 "is_configured": true, 00:11:31.496 "data_offset": 2048, 00:11:31.496 "data_size": 63488 00:11:31.496 } 00:11:31.496 ] 00:11:31.496 }' 00:11:31.496 09:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:31.496 09:46:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:31.496 09:46:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:31.496 09:46:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:31.496 09:46:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:32.141 [2024-10-30 09:46:10.739827] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:32.141 [2024-10-30 09:46:10.739927] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:32.141 [2024-10-30 09:46:10.740082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.706 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:32.706 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:32.706 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:32.706 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:32.706 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:32.706 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:32.706 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.706 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:32.706 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:32.707 "name": "raid_bdev1", 00:11:32.707 "uuid": "658b1f89-0c56-4049-b047-38c095037d5a", 00:11:32.707 "strip_size_kb": 0, 00:11:32.707 "state": "online", 00:11:32.707 "raid_level": "raid1", 00:11:32.707 "superblock": true, 00:11:32.707 "num_base_bdevs": 4, 00:11:32.707 "num_base_bdevs_discovered": 3, 00:11:32.707 "num_base_bdevs_operational": 3, 00:11:32.707 "base_bdevs_list": [ 00:11:32.707 { 00:11:32.707 "name": "spare", 00:11:32.707 "uuid": "37d3d746-a3d4-5bee-a4be-818ae3c74565", 00:11:32.707 "is_configured": true, 00:11:32.707 "data_offset": 2048, 00:11:32.707 "data_size": 63488 00:11:32.707 }, 00:11:32.707 { 00:11:32.707 "name": null, 00:11:32.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.707 "is_configured": false, 00:11:32.707 "data_offset": 0, 00:11:32.707 "data_size": 63488 00:11:32.707 }, 00:11:32.707 { 00:11:32.707 "name": "BaseBdev3", 00:11:32.707 "uuid": "922b4b74-96cd-5ca4-bcf7-01fd5c3e100d", 00:11:32.707 "is_configured": true, 00:11:32.707 "data_offset": 2048, 00:11:32.707 "data_size": 63488 00:11:32.707 }, 00:11:32.707 { 00:11:32.707 "name": "BaseBdev4", 00:11:32.707 "uuid": "bd4b019b-f8f6-5b88-a78c-2ce78f89000c", 00:11:32.707 "is_configured": true, 00:11:32.707 "data_offset": 2048, 00:11:32.707 "data_size": 63488 00:11:32.707 } 00:11:32.707 ] 00:11:32.707 }' 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 
00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:32.707 "name": "raid_bdev1", 00:11:32.707 "uuid": "658b1f89-0c56-4049-b047-38c095037d5a", 00:11:32.707 "strip_size_kb": 0, 00:11:32.707 "state": "online", 00:11:32.707 "raid_level": "raid1", 00:11:32.707 "superblock": true, 00:11:32.707 "num_base_bdevs": 4, 00:11:32.707 "num_base_bdevs_discovered": 3, 00:11:32.707 "num_base_bdevs_operational": 3, 00:11:32.707 "base_bdevs_list": [ 00:11:32.707 { 00:11:32.707 "name": "spare", 00:11:32.707 "uuid": 
"37d3d746-a3d4-5bee-a4be-818ae3c74565", 00:11:32.707 "is_configured": true, 00:11:32.707 "data_offset": 2048, 00:11:32.707 "data_size": 63488 00:11:32.707 }, 00:11:32.707 { 00:11:32.707 "name": null, 00:11:32.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.707 "is_configured": false, 00:11:32.707 "data_offset": 0, 00:11:32.707 "data_size": 63488 00:11:32.707 }, 00:11:32.707 { 00:11:32.707 "name": "BaseBdev3", 00:11:32.707 "uuid": "922b4b74-96cd-5ca4-bcf7-01fd5c3e100d", 00:11:32.707 "is_configured": true, 00:11:32.707 "data_offset": 2048, 00:11:32.707 "data_size": 63488 00:11:32.707 }, 00:11:32.707 { 00:11:32.707 "name": "BaseBdev4", 00:11:32.707 "uuid": "bd4b019b-f8f6-5b88-a78c-2ce78f89000c", 00:11:32.707 "is_configured": true, 00:11:32.707 "data_offset": 2048, 00:11:32.707 "data_size": 63488 00:11:32.707 } 00:11:32.707 ] 00:11:32.707 }' 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.707 "name": "raid_bdev1", 00:11:32.707 "uuid": "658b1f89-0c56-4049-b047-38c095037d5a", 00:11:32.707 "strip_size_kb": 0, 00:11:32.707 "state": "online", 00:11:32.707 "raid_level": "raid1", 00:11:32.707 "superblock": true, 00:11:32.707 "num_base_bdevs": 4, 00:11:32.707 "num_base_bdevs_discovered": 3, 00:11:32.707 "num_base_bdevs_operational": 3, 00:11:32.707 "base_bdevs_list": [ 00:11:32.707 { 00:11:32.707 "name": "spare", 00:11:32.707 "uuid": "37d3d746-a3d4-5bee-a4be-818ae3c74565", 00:11:32.707 "is_configured": true, 00:11:32.707 "data_offset": 2048, 00:11:32.707 "data_size": 63488 00:11:32.707 }, 00:11:32.707 { 00:11:32.707 "name": null, 00:11:32.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.707 "is_configured": false, 00:11:32.707 "data_offset": 0, 00:11:32.707 "data_size": 63488 00:11:32.707 }, 00:11:32.707 { 00:11:32.707 "name": "BaseBdev3", 00:11:32.707 "uuid": 
"922b4b74-96cd-5ca4-bcf7-01fd5c3e100d", 00:11:32.707 "is_configured": true, 00:11:32.707 "data_offset": 2048, 00:11:32.707 "data_size": 63488 00:11:32.707 }, 00:11:32.707 { 00:11:32.707 "name": "BaseBdev4", 00:11:32.707 "uuid": "bd4b019b-f8f6-5b88-a78c-2ce78f89000c", 00:11:32.707 "is_configured": true, 00:11:32.707 "data_offset": 2048, 00:11:32.707 "data_size": 63488 00:11:32.707 } 00:11:32.707 ] 00:11:32.707 }' 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.707 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.965 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:32.965 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.965 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.965 [2024-10-30 09:46:11.512695] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:32.965 [2024-10-30 09:46:11.512728] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.965 [2024-10-30 09:46:11.512796] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.965 [2024-10-30 09:46:11.512867] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.965 [2024-10-30 09:46:11.512875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:32.965 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.965 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:11:32.965 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.965 09:46:11 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.965 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.965 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.965 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:32.965 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:32.965 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:32.965 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:32.965 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:32.966 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:32.966 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:32.966 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:32.966 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:32.966 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:32.966 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:32.966 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:32.966 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:33.224 /dev/nbd0 00:11:33.224 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:33.224 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:33.224 09:46:11 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:11:33.224 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:11:33.224 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:33.224 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:33.224 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:11:33.224 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:11:33.224 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:33.224 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:33.224 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:33.224 1+0 records in 00:11:33.224 1+0 records out 00:11:33.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233418 s, 17.5 MB/s 00:11:33.224 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.224 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:11:33.224 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.224 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:33.224 09:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:11:33.224 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:33.224 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:33.224 09:46:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:33.482 /dev/nbd1 00:11:33.482 09:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:33.482 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:33.482 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:11:33.482 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:11:33.482 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:33.482 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:33.482 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:11:33.482 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:11:33.482 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:33.482 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:33.482 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:33.482 1+0 records in 00:11:33.482 1+0 records out 00:11:33.482 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275546 s, 14.9 MB/s 00:11:33.482 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.482 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:11:33.482 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.482 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # 
'[' 4096 '!=' 0 ']' 00:11:33.482 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:11:33.482 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:33.482 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:33.482 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:33.739 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:33.739 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:33.739 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:33.739 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:33.739 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:33.739 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.740 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:33.740 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:33.740 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:33.740 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:33.740 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.740 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:33.740 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:33.740 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:33.740 
09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:33.740 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.740 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:33.997 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:33.997 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:33.997 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:33.997 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.997 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:33.997 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:33.997 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:33.997 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:33.998 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:33.998 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:33.998 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.998 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.998 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.998 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:33.998 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.998 09:46:12 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:33.998 [2024-10-30 09:46:12.554693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:33.998 [2024-10-30 09:46:12.554742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.998 [2024-10-30 09:46:12.554762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:33.998 [2024-10-30 09:46:12.554770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.998 [2024-10-30 09:46:12.556637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.998 [2024-10-30 09:46:12.556674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:33.998 [2024-10-30 09:46:12.556753] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:33.998 [2024-10-30 09:46:12.556791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:33.998 [2024-10-30 09:46:12.556922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:33.998 [2024-10-30 09:46:12.557053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:33.998 spare 00:11:33.998 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.998 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:33.998 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.998 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.256 [2024-10-30 09:46:12.657152] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:34.256 [2024-10-30 09:46:12.657195] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:34.256 [2024-10-30 
09:46:12.657490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:11:34.256 [2024-10-30 09:46:12.657658] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:34.256 [2024-10-30 09:46:12.657679] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:11:34.256 [2024-10-30 09:46:12.657822] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.256 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.256 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:34.256 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.256 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.256 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.256 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.256 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.256 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.256 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.256 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.256 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.256 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.256 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.256 09:46:12 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.256 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.256 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.256 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.256 "name": "raid_bdev1", 00:11:34.256 "uuid": "658b1f89-0c56-4049-b047-38c095037d5a", 00:11:34.256 "strip_size_kb": 0, 00:11:34.256 "state": "online", 00:11:34.256 "raid_level": "raid1", 00:11:34.256 "superblock": true, 00:11:34.256 "num_base_bdevs": 4, 00:11:34.256 "num_base_bdevs_discovered": 3, 00:11:34.256 "num_base_bdevs_operational": 3, 00:11:34.256 "base_bdevs_list": [ 00:11:34.256 { 00:11:34.256 "name": "spare", 00:11:34.256 "uuid": "37d3d746-a3d4-5bee-a4be-818ae3c74565", 00:11:34.256 "is_configured": true, 00:11:34.256 "data_offset": 2048, 00:11:34.256 "data_size": 63488 00:11:34.256 }, 00:11:34.256 { 00:11:34.256 "name": null, 00:11:34.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.256 "is_configured": false, 00:11:34.256 "data_offset": 2048, 00:11:34.256 "data_size": 63488 00:11:34.256 }, 00:11:34.256 { 00:11:34.256 "name": "BaseBdev3", 00:11:34.256 "uuid": "922b4b74-96cd-5ca4-bcf7-01fd5c3e100d", 00:11:34.256 "is_configured": true, 00:11:34.256 "data_offset": 2048, 00:11:34.256 "data_size": 63488 00:11:34.256 }, 00:11:34.256 { 00:11:34.256 "name": "BaseBdev4", 00:11:34.257 "uuid": "bd4b019b-f8f6-5b88-a78c-2ce78f89000c", 00:11:34.257 "is_configured": true, 00:11:34.257 "data_offset": 2048, 00:11:34.257 "data_size": 63488 00:11:34.257 } 00:11:34.257 ] 00:11:34.257 }' 00:11:34.257 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.257 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.514 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:11:34.514 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:34.514 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:34.514 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:34.514 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:34.514 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.514 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.514 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.514 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.514 09:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.514 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:34.514 "name": "raid_bdev1", 00:11:34.514 "uuid": "658b1f89-0c56-4049-b047-38c095037d5a", 00:11:34.514 "strip_size_kb": 0, 00:11:34.514 "state": "online", 00:11:34.514 "raid_level": "raid1", 00:11:34.514 "superblock": true, 00:11:34.514 "num_base_bdevs": 4, 00:11:34.514 "num_base_bdevs_discovered": 3, 00:11:34.514 "num_base_bdevs_operational": 3, 00:11:34.514 "base_bdevs_list": [ 00:11:34.514 { 00:11:34.514 "name": "spare", 00:11:34.514 "uuid": "37d3d746-a3d4-5bee-a4be-818ae3c74565", 00:11:34.514 "is_configured": true, 00:11:34.514 "data_offset": 2048, 00:11:34.514 "data_size": 63488 00:11:34.514 }, 00:11:34.514 { 00:11:34.514 "name": null, 00:11:34.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.514 "is_configured": false, 00:11:34.514 "data_offset": 2048, 00:11:34.514 "data_size": 63488 00:11:34.514 }, 00:11:34.514 { 00:11:34.514 "name": 
"BaseBdev3", 00:11:34.514 "uuid": "922b4b74-96cd-5ca4-bcf7-01fd5c3e100d", 00:11:34.514 "is_configured": true, 00:11:34.514 "data_offset": 2048, 00:11:34.515 "data_size": 63488 00:11:34.515 }, 00:11:34.515 { 00:11:34.515 "name": "BaseBdev4", 00:11:34.515 "uuid": "bd4b019b-f8f6-5b88-a78c-2ce78f89000c", 00:11:34.515 "is_configured": true, 00:11:34.515 "data_offset": 2048, 00:11:34.515 "data_size": 63488 00:11:34.515 } 00:11:34.515 ] 00:11:34.515 }' 00:11:34.515 09:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.515 [2024-10-30 09:46:13.086855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:34.515 09:46:13 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.515 "name": "raid_bdev1", 00:11:34.515 "uuid": "658b1f89-0c56-4049-b047-38c095037d5a", 00:11:34.515 "strip_size_kb": 0, 00:11:34.515 "state": "online", 
00:11:34.515 "raid_level": "raid1", 00:11:34.515 "superblock": true, 00:11:34.515 "num_base_bdevs": 4, 00:11:34.515 "num_base_bdevs_discovered": 2, 00:11:34.515 "num_base_bdevs_operational": 2, 00:11:34.515 "base_bdevs_list": [ 00:11:34.515 { 00:11:34.515 "name": null, 00:11:34.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.515 "is_configured": false, 00:11:34.515 "data_offset": 0, 00:11:34.515 "data_size": 63488 00:11:34.515 }, 00:11:34.515 { 00:11:34.515 "name": null, 00:11:34.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.515 "is_configured": false, 00:11:34.515 "data_offset": 2048, 00:11:34.515 "data_size": 63488 00:11:34.515 }, 00:11:34.515 { 00:11:34.515 "name": "BaseBdev3", 00:11:34.515 "uuid": "922b4b74-96cd-5ca4-bcf7-01fd5c3e100d", 00:11:34.515 "is_configured": true, 00:11:34.515 "data_offset": 2048, 00:11:34.515 "data_size": 63488 00:11:34.515 }, 00:11:34.515 { 00:11:34.515 "name": "BaseBdev4", 00:11:34.515 "uuid": "bd4b019b-f8f6-5b88-a78c-2ce78f89000c", 00:11:34.515 "is_configured": true, 00:11:34.515 "data_offset": 2048, 00:11:34.515 "data_size": 63488 00:11:34.515 } 00:11:34.515 ] 00:11:34.515 }' 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.515 09:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.080 09:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:35.080 09:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.080 09:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.080 [2024-10-30 09:46:13.414913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:35.080 [2024-10-30 09:46:13.415087] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 
00:11:35.080 [2024-10-30 09:46:13.415104] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:35.080 [2024-10-30 09:46:13.415140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:35.080 [2024-10-30 09:46:13.423096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:11:35.080 [2024-10-30 09:46:13.424686] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:35.080 09:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.080 09:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:36.079 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:36.079 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:36.079 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:36.079 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:36.079 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:36.079 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.079 09:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.079 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.079 09:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.079 09:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.079 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:36.079 "name": "raid_bdev1", 00:11:36.079 "uuid": 
"658b1f89-0c56-4049-b047-38c095037d5a", 00:11:36.079 "strip_size_kb": 0, 00:11:36.079 "state": "online", 00:11:36.079 "raid_level": "raid1", 00:11:36.079 "superblock": true, 00:11:36.079 "num_base_bdevs": 4, 00:11:36.079 "num_base_bdevs_discovered": 3, 00:11:36.079 "num_base_bdevs_operational": 3, 00:11:36.079 "process": { 00:11:36.079 "type": "rebuild", 00:11:36.079 "target": "spare", 00:11:36.079 "progress": { 00:11:36.079 "blocks": 20480, 00:11:36.079 "percent": 32 00:11:36.079 } 00:11:36.079 }, 00:11:36.079 "base_bdevs_list": [ 00:11:36.079 { 00:11:36.079 "name": "spare", 00:11:36.079 "uuid": "37d3d746-a3d4-5bee-a4be-818ae3c74565", 00:11:36.079 "is_configured": true, 00:11:36.079 "data_offset": 2048, 00:11:36.079 "data_size": 63488 00:11:36.079 }, 00:11:36.079 { 00:11:36.079 "name": null, 00:11:36.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.079 "is_configured": false, 00:11:36.079 "data_offset": 2048, 00:11:36.079 "data_size": 63488 00:11:36.079 }, 00:11:36.079 { 00:11:36.079 "name": "BaseBdev3", 00:11:36.079 "uuid": "922b4b74-96cd-5ca4-bcf7-01fd5c3e100d", 00:11:36.079 "is_configured": true, 00:11:36.079 "data_offset": 2048, 00:11:36.079 "data_size": 63488 00:11:36.079 }, 00:11:36.079 { 00:11:36.079 "name": "BaseBdev4", 00:11:36.079 "uuid": "bd4b019b-f8f6-5b88-a78c-2ce78f89000c", 00:11:36.079 "is_configured": true, 00:11:36.079 "data_offset": 2048, 00:11:36.079 "data_size": 63488 00:11:36.079 } 00:11:36.079 ] 00:11:36.079 }' 00:11:36.079 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:36.079 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:36.079 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:36.080 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:36.080 09:46:14 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:36.080 09:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.080 09:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.080 [2024-10-30 09:46:14.531085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:36.080 [2024-10-30 09:46:14.630288] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:36.080 [2024-10-30 09:46:14.630360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.080 [2024-10-30 09:46:14.630375] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:36.080 [2024-10-30 09:46:14.630381] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:36.080 09:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.080 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:36.080 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.080 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.080 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.080 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.080 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:36.080 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.080 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.080 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:36.080 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.080 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.080 09:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.080 09:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.080 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.080 09:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.080 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.080 "name": "raid_bdev1", 00:11:36.080 "uuid": "658b1f89-0c56-4049-b047-38c095037d5a", 00:11:36.080 "strip_size_kb": 0, 00:11:36.080 "state": "online", 00:11:36.080 "raid_level": "raid1", 00:11:36.080 "superblock": true, 00:11:36.080 "num_base_bdevs": 4, 00:11:36.080 "num_base_bdevs_discovered": 2, 00:11:36.080 "num_base_bdevs_operational": 2, 00:11:36.080 "base_bdevs_list": [ 00:11:36.080 { 00:11:36.080 "name": null, 00:11:36.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.080 "is_configured": false, 00:11:36.080 "data_offset": 0, 00:11:36.080 "data_size": 63488 00:11:36.080 }, 00:11:36.080 { 00:11:36.080 "name": null, 00:11:36.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.080 "is_configured": false, 00:11:36.080 "data_offset": 2048, 00:11:36.080 "data_size": 63488 00:11:36.080 }, 00:11:36.080 { 00:11:36.080 "name": "BaseBdev3", 00:11:36.080 "uuid": "922b4b74-96cd-5ca4-bcf7-01fd5c3e100d", 00:11:36.080 "is_configured": true, 00:11:36.080 "data_offset": 2048, 00:11:36.080 "data_size": 63488 00:11:36.080 }, 00:11:36.080 { 00:11:36.080 "name": "BaseBdev4", 00:11:36.080 "uuid": "bd4b019b-f8f6-5b88-a78c-2ce78f89000c", 00:11:36.080 "is_configured": true, 00:11:36.080 
"data_offset": 2048, 00:11:36.080 "data_size": 63488 00:11:36.080 } 00:11:36.080 ] 00:11:36.080 }' 00:11:36.080 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.080 09:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.338 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:36.595 09:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.595 09:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.595 [2024-10-30 09:46:14.962542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:36.595 [2024-10-30 09:46:14.962601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.595 [2024-10-30 09:46:14.962622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:11:36.595 [2024-10-30 09:46:14.962630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.595 [2024-10-30 09:46:14.963010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.595 [2024-10-30 09:46:14.963035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:36.595 [2024-10-30 09:46:14.963120] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:36.595 [2024-10-30 09:46:14.963130] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:11:36.595 [2024-10-30 09:46:14.963143] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:36.595 [2024-10-30 09:46:14.963166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:36.595 [2024-10-30 09:46:14.970869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:11:36.595 spare 00:11:36.595 09:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.595 09:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:36.595 [2024-10-30 09:46:14.972465] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:37.527 09:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:37.527 09:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:37.527 09:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:37.527 09:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:37.527 09:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:37.527 09:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.527 09:46:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.527 09:46:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.527 09:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.527 09:46:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.527 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:37.527 "name": "raid_bdev1", 00:11:37.527 "uuid": "658b1f89-0c56-4049-b047-38c095037d5a", 00:11:37.527 "strip_size_kb": 0, 00:11:37.527 "state": "online", 00:11:37.527 
"raid_level": "raid1", 00:11:37.527 "superblock": true, 00:11:37.527 "num_base_bdevs": 4, 00:11:37.527 "num_base_bdevs_discovered": 3, 00:11:37.527 "num_base_bdevs_operational": 3, 00:11:37.527 "process": { 00:11:37.527 "type": "rebuild", 00:11:37.527 "target": "spare", 00:11:37.527 "progress": { 00:11:37.527 "blocks": 20480, 00:11:37.527 "percent": 32 00:11:37.527 } 00:11:37.527 }, 00:11:37.527 "base_bdevs_list": [ 00:11:37.527 { 00:11:37.527 "name": "spare", 00:11:37.527 "uuid": "37d3d746-a3d4-5bee-a4be-818ae3c74565", 00:11:37.527 "is_configured": true, 00:11:37.527 "data_offset": 2048, 00:11:37.527 "data_size": 63488 00:11:37.527 }, 00:11:37.527 { 00:11:37.527 "name": null, 00:11:37.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.527 "is_configured": false, 00:11:37.527 "data_offset": 2048, 00:11:37.527 "data_size": 63488 00:11:37.527 }, 00:11:37.527 { 00:11:37.527 "name": "BaseBdev3", 00:11:37.527 "uuid": "922b4b74-96cd-5ca4-bcf7-01fd5c3e100d", 00:11:37.527 "is_configured": true, 00:11:37.527 "data_offset": 2048, 00:11:37.527 "data_size": 63488 00:11:37.527 }, 00:11:37.527 { 00:11:37.527 "name": "BaseBdev4", 00:11:37.527 "uuid": "bd4b019b-f8f6-5b88-a78c-2ce78f89000c", 00:11:37.527 "is_configured": true, 00:11:37.527 "data_offset": 2048, 00:11:37.527 "data_size": 63488 00:11:37.527 } 00:11:37.527 ] 00:11:37.527 }' 00:11:37.527 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:37.527 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:37.527 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:37.527 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:37.527 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:37.527 09:46:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.527 09:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.527 [2024-10-30 09:46:16.070788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:37.527 [2024-10-30 09:46:16.077644] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:37.527 [2024-10-30 09:46:16.077695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.527 [2024-10-30 09:46:16.077708] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:37.527 [2024-10-30 09:46:16.077715] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:37.527 09:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.527 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:37.527 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.527 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.527 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.527 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.527 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:37.527 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.527 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.527 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.527 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.527 
09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.527 09:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.527 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.527 09:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.527 09:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.527 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.527 "name": "raid_bdev1", 00:11:37.527 "uuid": "658b1f89-0c56-4049-b047-38c095037d5a", 00:11:37.527 "strip_size_kb": 0, 00:11:37.527 "state": "online", 00:11:37.527 "raid_level": "raid1", 00:11:37.527 "superblock": true, 00:11:37.527 "num_base_bdevs": 4, 00:11:37.528 "num_base_bdevs_discovered": 2, 00:11:37.528 "num_base_bdevs_operational": 2, 00:11:37.528 "base_bdevs_list": [ 00:11:37.528 { 00:11:37.528 "name": null, 00:11:37.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.528 "is_configured": false, 00:11:37.528 "data_offset": 0, 00:11:37.528 "data_size": 63488 00:11:37.528 }, 00:11:37.528 { 00:11:37.528 "name": null, 00:11:37.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.528 "is_configured": false, 00:11:37.528 "data_offset": 2048, 00:11:37.528 "data_size": 63488 00:11:37.528 }, 00:11:37.528 { 00:11:37.528 "name": "BaseBdev3", 00:11:37.528 "uuid": "922b4b74-96cd-5ca4-bcf7-01fd5c3e100d", 00:11:37.528 "is_configured": true, 00:11:37.528 "data_offset": 2048, 00:11:37.528 "data_size": 63488 00:11:37.528 }, 00:11:37.528 { 00:11:37.528 "name": "BaseBdev4", 00:11:37.528 "uuid": "bd4b019b-f8f6-5b88-a78c-2ce78f89000c", 00:11:37.528 "is_configured": true, 00:11:37.528 "data_offset": 2048, 00:11:37.528 "data_size": 63488 00:11:37.528 } 00:11:37.528 ] 00:11:37.528 }' 00:11:37.528 09:46:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.528 09:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.093 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:38.093 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:38.093 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:38.093 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:38.093 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:38.093 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.093 09:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.093 09:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.093 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.093 09:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.093 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:38.093 "name": "raid_bdev1", 00:11:38.093 "uuid": "658b1f89-0c56-4049-b047-38c095037d5a", 00:11:38.093 "strip_size_kb": 0, 00:11:38.093 "state": "online", 00:11:38.093 "raid_level": "raid1", 00:11:38.093 "superblock": true, 00:11:38.093 "num_base_bdevs": 4, 00:11:38.093 "num_base_bdevs_discovered": 2, 00:11:38.093 "num_base_bdevs_operational": 2, 00:11:38.093 "base_bdevs_list": [ 00:11:38.093 { 00:11:38.093 "name": null, 00:11:38.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.093 "is_configured": false, 00:11:38.093 "data_offset": 0, 00:11:38.093 "data_size": 63488 00:11:38.093 }, 00:11:38.093 
{ 00:11:38.093 "name": null, 00:11:38.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.093 "is_configured": false, 00:11:38.093 "data_offset": 2048, 00:11:38.094 "data_size": 63488 00:11:38.094 }, 00:11:38.094 { 00:11:38.094 "name": "BaseBdev3", 00:11:38.094 "uuid": "922b4b74-96cd-5ca4-bcf7-01fd5c3e100d", 00:11:38.094 "is_configured": true, 00:11:38.094 "data_offset": 2048, 00:11:38.094 "data_size": 63488 00:11:38.094 }, 00:11:38.094 { 00:11:38.094 "name": "BaseBdev4", 00:11:38.094 "uuid": "bd4b019b-f8f6-5b88-a78c-2ce78f89000c", 00:11:38.094 "is_configured": true, 00:11:38.094 "data_offset": 2048, 00:11:38.094 "data_size": 63488 00:11:38.094 } 00:11:38.094 ] 00:11:38.094 }' 00:11:38.094 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:38.094 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:38.094 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:38.094 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:38.094 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:38.094 09:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.094 09:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.094 09:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.094 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:38.094 09:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.094 09:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.094 [2024-10-30 09:46:16.521787] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:38.094 [2024-10-30 09:46:16.521840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.094 [2024-10-30 09:46:16.521855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:11:38.094 [2024-10-30 09:46:16.521865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.094 [2024-10-30 09:46:16.522229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.094 [2024-10-30 09:46:16.522249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:38.094 [2024-10-30 09:46:16.522311] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:38.094 [2024-10-30 09:46:16.522323] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:11:38.094 [2024-10-30 09:46:16.522329] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:38.094 [2024-10-30 09:46:16.522340] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:38.094 BaseBdev1 00:11:38.094 09:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.094 09:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:39.028 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:39.028 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.028 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.028 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.028 09:46:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.028 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:39.028 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.028 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.028 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.028 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.028 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.028 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.028 09:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.028 09:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.028 09:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.028 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.028 "name": "raid_bdev1", 00:11:39.028 "uuid": "658b1f89-0c56-4049-b047-38c095037d5a", 00:11:39.028 "strip_size_kb": 0, 00:11:39.028 "state": "online", 00:11:39.028 "raid_level": "raid1", 00:11:39.028 "superblock": true, 00:11:39.028 "num_base_bdevs": 4, 00:11:39.028 "num_base_bdevs_discovered": 2, 00:11:39.028 "num_base_bdevs_operational": 2, 00:11:39.028 "base_bdevs_list": [ 00:11:39.028 { 00:11:39.028 "name": null, 00:11:39.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.028 "is_configured": false, 00:11:39.028 "data_offset": 0, 00:11:39.028 "data_size": 63488 00:11:39.028 }, 00:11:39.028 { 00:11:39.028 "name": null, 00:11:39.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.028 
"is_configured": false, 00:11:39.028 "data_offset": 2048, 00:11:39.028 "data_size": 63488 00:11:39.028 }, 00:11:39.028 { 00:11:39.028 "name": "BaseBdev3", 00:11:39.028 "uuid": "922b4b74-96cd-5ca4-bcf7-01fd5c3e100d", 00:11:39.028 "is_configured": true, 00:11:39.028 "data_offset": 2048, 00:11:39.028 "data_size": 63488 00:11:39.028 }, 00:11:39.028 { 00:11:39.028 "name": "BaseBdev4", 00:11:39.028 "uuid": "bd4b019b-f8f6-5b88-a78c-2ce78f89000c", 00:11:39.028 "is_configured": true, 00:11:39.028 "data_offset": 2048, 00:11:39.028 "data_size": 63488 00:11:39.028 } 00:11:39.028 ] 00:11:39.028 }' 00:11:39.028 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.028 09:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.388 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:39.388 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:39.388 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:39.388 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:39.388 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:39.388 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.388 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.388 09:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.388 09:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.388 09:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.388 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:11:39.388 "name": "raid_bdev1", 00:11:39.388 "uuid": "658b1f89-0c56-4049-b047-38c095037d5a", 00:11:39.388 "strip_size_kb": 0, 00:11:39.388 "state": "online", 00:11:39.388 "raid_level": "raid1", 00:11:39.388 "superblock": true, 00:11:39.388 "num_base_bdevs": 4, 00:11:39.388 "num_base_bdevs_discovered": 2, 00:11:39.388 "num_base_bdevs_operational": 2, 00:11:39.388 "base_bdevs_list": [ 00:11:39.388 { 00:11:39.388 "name": null, 00:11:39.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.388 "is_configured": false, 00:11:39.388 "data_offset": 0, 00:11:39.388 "data_size": 63488 00:11:39.388 }, 00:11:39.388 { 00:11:39.388 "name": null, 00:11:39.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.388 "is_configured": false, 00:11:39.388 "data_offset": 2048, 00:11:39.388 "data_size": 63488 00:11:39.388 }, 00:11:39.388 { 00:11:39.388 "name": "BaseBdev3", 00:11:39.388 "uuid": "922b4b74-96cd-5ca4-bcf7-01fd5c3e100d", 00:11:39.388 "is_configured": true, 00:11:39.388 "data_offset": 2048, 00:11:39.388 "data_size": 63488 00:11:39.388 }, 00:11:39.388 { 00:11:39.388 "name": "BaseBdev4", 00:11:39.388 "uuid": "bd4b019b-f8f6-5b88-a78c-2ce78f89000c", 00:11:39.388 "is_configured": true, 00:11:39.388 "data_offset": 2048, 00:11:39.388 "data_size": 63488 00:11:39.388 } 00:11:39.388 ] 00:11:39.388 }' 00:11:39.388 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:39.388 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:39.388 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:39.388 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:39.388 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:39.388 09:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:11:39.388 09:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:39.388 09:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:39.388 09:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:39.388 09:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:39.388 09:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:39.388 09:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:39.388 09:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.388 09:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.388 [2024-10-30 09:46:17.934072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:39.388 [2024-10-30 09:46:17.934229] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:11:39.389 [2024-10-30 09:46:17.934245] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:39.389 request: 00:11:39.389 { 00:11:39.389 "base_bdev": "BaseBdev1", 00:11:39.389 "raid_bdev": "raid_bdev1", 00:11:39.389 "method": "bdev_raid_add_base_bdev", 00:11:39.389 "req_id": 1 00:11:39.389 } 00:11:39.389 Got JSON-RPC error response 00:11:39.389 response: 00:11:39.389 { 00:11:39.389 "code": -22, 00:11:39.389 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:39.389 } 00:11:39.389 09:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:39.389 09:46:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:11:39.389 09:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:39.389 09:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:39.389 09:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:39.389 09:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:40.761 09:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:40.761 09:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.761 09:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.761 09:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.761 09:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.761 09:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:40.761 09:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.761 09:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.761 09:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.761 09:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.761 09:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.761 09:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.761 09:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.761 09:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:40.761 09:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.761 09:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.761 "name": "raid_bdev1", 00:11:40.761 "uuid": "658b1f89-0c56-4049-b047-38c095037d5a", 00:11:40.761 "strip_size_kb": 0, 00:11:40.761 "state": "online", 00:11:40.761 "raid_level": "raid1", 00:11:40.761 "superblock": true, 00:11:40.761 "num_base_bdevs": 4, 00:11:40.761 "num_base_bdevs_discovered": 2, 00:11:40.761 "num_base_bdevs_operational": 2, 00:11:40.761 "base_bdevs_list": [ 00:11:40.761 { 00:11:40.761 "name": null, 00:11:40.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.761 "is_configured": false, 00:11:40.761 "data_offset": 0, 00:11:40.761 "data_size": 63488 00:11:40.761 }, 00:11:40.761 { 00:11:40.761 "name": null, 00:11:40.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.761 "is_configured": false, 00:11:40.761 "data_offset": 2048, 00:11:40.761 "data_size": 63488 00:11:40.761 }, 00:11:40.761 { 00:11:40.761 "name": "BaseBdev3", 00:11:40.761 "uuid": "922b4b74-96cd-5ca4-bcf7-01fd5c3e100d", 00:11:40.761 "is_configured": true, 00:11:40.761 "data_offset": 2048, 00:11:40.761 "data_size": 63488 00:11:40.761 }, 00:11:40.761 { 00:11:40.761 "name": "BaseBdev4", 00:11:40.761 "uuid": "bd4b019b-f8f6-5b88-a78c-2ce78f89000c", 00:11:40.761 "is_configured": true, 00:11:40.761 "data_offset": 2048, 00:11:40.761 "data_size": 63488 00:11:40.761 } 00:11:40.761 ] 00:11:40.761 }' 00:11:40.761 09:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.761 09:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.761 09:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:40.761 09:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:40.761 09:46:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:40.761 09:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:40.761 09:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:40.761 09:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.761 09:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.761 09:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.761 09:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.761 09:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.762 09:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:40.762 "name": "raid_bdev1", 00:11:40.762 "uuid": "658b1f89-0c56-4049-b047-38c095037d5a", 00:11:40.762 "strip_size_kb": 0, 00:11:40.762 "state": "online", 00:11:40.762 "raid_level": "raid1", 00:11:40.762 "superblock": true, 00:11:40.762 "num_base_bdevs": 4, 00:11:40.762 "num_base_bdevs_discovered": 2, 00:11:40.762 "num_base_bdevs_operational": 2, 00:11:40.762 "base_bdevs_list": [ 00:11:40.762 { 00:11:40.762 "name": null, 00:11:40.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.762 "is_configured": false, 00:11:40.762 "data_offset": 0, 00:11:40.762 "data_size": 63488 00:11:40.762 }, 00:11:40.762 { 00:11:40.762 "name": null, 00:11:40.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.762 "is_configured": false, 00:11:40.762 "data_offset": 2048, 00:11:40.762 "data_size": 63488 00:11:40.762 }, 00:11:40.762 { 00:11:40.762 "name": "BaseBdev3", 00:11:40.762 "uuid": "922b4b74-96cd-5ca4-bcf7-01fd5c3e100d", 00:11:40.762 "is_configured": true, 00:11:40.762 "data_offset": 2048, 00:11:40.762 "data_size": 63488 00:11:40.762 }, 
00:11:40.762 { 00:11:40.762 "name": "BaseBdev4", 00:11:40.762 "uuid": "bd4b019b-f8f6-5b88-a78c-2ce78f89000c", 00:11:40.762 "is_configured": true, 00:11:40.762 "data_offset": 2048, 00:11:40.762 "data_size": 63488 00:11:40.762 } 00:11:40.762 ] 00:11:40.762 }' 00:11:40.762 09:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:40.762 09:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:40.762 09:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:40.762 09:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:40.762 09:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75925 00:11:40.762 09:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 75925 ']' 00:11:40.762 09:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 75925 00:11:40.762 09:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:11:40.762 09:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:40.762 09:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75925 00:11:40.762 09:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:40.762 09:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:40.762 killing process with pid 75925 00:11:40.762 09:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75925' 00:11:40.762 09:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 75925 00:11:40.762 Received shutdown signal, test time was about 60.000000 seconds 00:11:40.762 00:11:40.762 Latency(us) 00:11:40.762 
[2024-10-30T09:46:19.382Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:40.762 [2024-10-30T09:46:19.382Z] =================================================================================================================== 00:11:40.762 [2024-10-30T09:46:19.382Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:40.762 [2024-10-30 09:46:19.363610] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:40.762 09:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 75925 00:11:40.762 [2024-10-30 09:46:19.363708] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:40.762 [2024-10-30 09:46:19.363765] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:40.762 [2024-10-30 09:46:19.363780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:11:41.020 [2024-10-30 09:46:19.607684] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:41.581 09:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:11:41.581 00:11:41.581 real 0m21.607s 00:11:41.581 user 0m25.266s 00:11:41.581 sys 0m2.929s 00:11:41.581 09:46:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:41.581 09:46:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.581 ************************************ 00:11:41.581 END TEST raid_rebuild_test_sb 00:11:41.581 ************************************ 00:11:41.839 09:46:20 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:11:41.839 09:46:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:11:41.839 09:46:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:41.839 09:46:20 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:11:41.839 ************************************ 00:11:41.839 START TEST raid_rebuild_test_io 00:11:41.839 ************************************ 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false true true 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76651 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76651 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 76651 ']' 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:41.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:41.839 09:46:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:41.839 [2024-10-30 09:46:20.293347] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:11:41.839 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:41.839 Zero copy mechanism will not be used. 00:11:41.839 [2024-10-30 09:46:20.293477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76651 ] 00:11:41.839 [2024-10-30 09:46:20.451317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.097 [2024-10-30 09:46:20.552937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.097 [2024-10-30 09:46:20.692660] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.097 [2024-10-30 09:46:20.692706] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.664 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:42.664 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:11:42.664 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:42.664 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:42.664 
09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.664 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.664 BaseBdev1_malloc 00:11:42.664 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.664 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:42.664 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.664 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.664 [2024-10-30 09:46:21.203435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:42.664 [2024-10-30 09:46:21.203500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.664 [2024-10-30 09:46:21.203520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:42.664 [2024-10-30 09:46:21.203532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.664 [2024-10-30 09:46:21.205669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.664 [2024-10-30 09:46:21.205709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:42.664 BaseBdev1 00:11:42.664 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.664 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:42.664 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:42.664 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.664 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:11:42.664 BaseBdev2_malloc 00:11:42.664 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.664 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:42.664 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.664 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.664 [2024-10-30 09:46:21.239729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:42.664 [2024-10-30 09:46:21.239789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.664 [2024-10-30 09:46:21.239808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:42.664 [2024-10-30 09:46:21.239820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.664 [2024-10-30 09:46:21.241956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.664 [2024-10-30 09:46:21.241994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:42.664 BaseBdev2 00:11:42.664 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.664 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:42.664 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:42.664 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.664 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.926 BaseBdev3_malloc 00:11:42.926 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.926 09:46:21 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:11:42.926 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.926 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.926 [2024-10-30 09:46:21.294642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:11:42.926 [2024-10-30 09:46:21.294705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.926 [2024-10-30 09:46:21.294727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:42.926 [2024-10-30 09:46:21.294738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.926 [2024-10-30 09:46:21.296988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.926 [2024-10-30 09:46:21.297032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:42.926 BaseBdev3 00:11:42.926 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.926 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:42.926 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:42.926 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.926 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.926 BaseBdev4_malloc 00:11:42.926 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.926 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:11:42.926 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:42.926 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.926 [2024-10-30 09:46:21.338859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:11:42.926 [2024-10-30 09:46:21.338915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.926 [2024-10-30 09:46:21.338936] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:42.926 [2024-10-30 09:46:21.338948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.926 [2024-10-30 09:46:21.341089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.926 [2024-10-30 09:46:21.341122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:42.926 BaseBdev4 00:11:42.926 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.926 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:42.926 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.926 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.926 spare_malloc 00:11:42.926 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.926 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:42.926 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.926 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.926 spare_delay 00:11:42.926 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.926 09:46:21 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:42.926 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.926 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.926 [2024-10-30 09:46:21.383115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:42.926 [2024-10-30 09:46:21.383174] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.926 [2024-10-30 09:46:21.383191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:42.927 [2024-10-30 09:46:21.383201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.927 [2024-10-30 09:46:21.385346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.927 [2024-10-30 09:46:21.385381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:42.927 spare 00:11:42.927 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.927 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:11:42.927 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.927 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.927 [2024-10-30 09:46:21.391161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:42.927 [2024-10-30 09:46:21.393004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:42.927 [2024-10-30 09:46:21.393083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:42.927 [2024-10-30 09:46:21.393135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:11:42.927 [2024-10-30 09:46:21.393221] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:42.927 [2024-10-30 09:46:21.393261] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:42.927 [2024-10-30 09:46:21.393530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:42.927 [2024-10-30 09:46:21.393688] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:42.927 [2024-10-30 09:46:21.393705] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:42.927 [2024-10-30 09:46:21.393854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.927 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.927 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:42.927 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.927 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.927 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.927 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.927 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.927 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.927 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.927 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.927 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:11:42.927 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.927 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.927 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.927 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.927 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.927 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.927 "name": "raid_bdev1", 00:11:42.927 "uuid": "14bca0f6-09cb-4561-bce2-704f687f1d9d", 00:11:42.927 "strip_size_kb": 0, 00:11:42.927 "state": "online", 00:11:42.927 "raid_level": "raid1", 00:11:42.927 "superblock": false, 00:11:42.927 "num_base_bdevs": 4, 00:11:42.927 "num_base_bdevs_discovered": 4, 00:11:42.927 "num_base_bdevs_operational": 4, 00:11:42.927 "base_bdevs_list": [ 00:11:42.927 { 00:11:42.927 "name": "BaseBdev1", 00:11:42.927 "uuid": "398ed6aa-2970-51b1-9221-2843343c10a6", 00:11:42.927 "is_configured": true, 00:11:42.927 "data_offset": 0, 00:11:42.927 "data_size": 65536 00:11:42.927 }, 00:11:42.927 { 00:11:42.927 "name": "BaseBdev2", 00:11:42.927 "uuid": "f32df323-594b-561b-a3f6-ebbad2dae41c", 00:11:42.927 "is_configured": true, 00:11:42.927 "data_offset": 0, 00:11:42.927 "data_size": 65536 00:11:42.927 }, 00:11:42.927 { 00:11:42.927 "name": "BaseBdev3", 00:11:42.927 "uuid": "80dc6248-702c-5c44-a438-27e2c3572262", 00:11:42.927 "is_configured": true, 00:11:42.927 "data_offset": 0, 00:11:42.927 "data_size": 65536 00:11:42.927 }, 00:11:42.927 { 00:11:42.927 "name": "BaseBdev4", 00:11:42.927 "uuid": "6d0e86de-5568-5e10-b0bf-b04faf1f1524", 00:11:42.927 "is_configured": true, 00:11:42.927 "data_offset": 0, 00:11:42.927 "data_size": 65536 00:11:42.927 } 00:11:42.927 ] 00:11:42.927 }' 00:11:42.927 
09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.927 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:43.309 [2024-10-30 09:46:21.711588] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:43.309 [2024-10-30 09:46:21.767227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:43.309 09:46:21 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.309 "name": "raid_bdev1", 00:11:43.309 "uuid": "14bca0f6-09cb-4561-bce2-704f687f1d9d", 00:11:43.309 "strip_size_kb": 0, 00:11:43.309 "state": "online", 00:11:43.309 "raid_level": "raid1", 00:11:43.309 "superblock": false, 00:11:43.309 "num_base_bdevs": 4, 00:11:43.309 "num_base_bdevs_discovered": 3, 00:11:43.309 "num_base_bdevs_operational": 3, 00:11:43.309 "base_bdevs_list": [ 00:11:43.309 { 00:11:43.309 "name": null, 00:11:43.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.309 "is_configured": false, 00:11:43.309 "data_offset": 0, 00:11:43.309 "data_size": 65536 00:11:43.309 }, 00:11:43.309 { 00:11:43.309 "name": "BaseBdev2", 00:11:43.309 "uuid": "f32df323-594b-561b-a3f6-ebbad2dae41c", 00:11:43.309 "is_configured": true, 00:11:43.309 "data_offset": 0, 00:11:43.309 "data_size": 65536 00:11:43.309 }, 00:11:43.309 { 00:11:43.309 "name": "BaseBdev3", 00:11:43.309 "uuid": "80dc6248-702c-5c44-a438-27e2c3572262", 00:11:43.309 "is_configured": true, 00:11:43.309 "data_offset": 0, 00:11:43.309 "data_size": 65536 00:11:43.309 }, 00:11:43.309 { 00:11:43.309 "name": "BaseBdev4", 00:11:43.309 "uuid": "6d0e86de-5568-5e10-b0bf-b04faf1f1524", 00:11:43.309 "is_configured": true, 00:11:43.309 "data_offset": 0, 00:11:43.309 "data_size": 65536 00:11:43.309 } 00:11:43.309 ] 00:11:43.309 }' 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.309 09:46:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:43.309 [2024-10-30 09:46:21.856562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:43.309 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:43.309 Zero copy mechanism will not be used. 00:11:43.309 Running I/O for 60 seconds... 
00:11:43.568 09:46:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:43.568 09:46:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.568 09:46:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:43.568 [2024-10-30 09:46:22.101202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:43.568 09:46:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.568 09:46:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:43.568 [2024-10-30 09:46:22.170362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:11:43.568 [2024-10-30 09:46:22.172312] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:43.825 [2024-10-30 09:46:22.299090] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:43.825 [2024-10-30 09:46:22.442196] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:44.392 [2024-10-30 09:46:22.804413] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:44.650 159.00 IOPS, 477.00 MiB/s [2024-10-30T09:46:23.270Z] [2024-10-30 09:46:23.031871] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:44.650 [2024-10-30 09:46:23.032513] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:44.650 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:44.650 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:11:44.650 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:44.650 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:44.650 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:44.650 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.650 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.650 09:46:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.650 09:46:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:44.650 09:46:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.650 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:44.650 "name": "raid_bdev1", 00:11:44.650 "uuid": "14bca0f6-09cb-4561-bce2-704f687f1d9d", 00:11:44.651 "strip_size_kb": 0, 00:11:44.651 "state": "online", 00:11:44.651 "raid_level": "raid1", 00:11:44.651 "superblock": false, 00:11:44.651 "num_base_bdevs": 4, 00:11:44.651 "num_base_bdevs_discovered": 4, 00:11:44.651 "num_base_bdevs_operational": 4, 00:11:44.651 "process": { 00:11:44.651 "type": "rebuild", 00:11:44.651 "target": "spare", 00:11:44.651 "progress": { 00:11:44.651 "blocks": 10240, 00:11:44.651 "percent": 15 00:11:44.651 } 00:11:44.651 }, 00:11:44.651 "base_bdevs_list": [ 00:11:44.651 { 00:11:44.651 "name": "spare", 00:11:44.651 "uuid": "9a24b7e3-1e52-5bf3-81b6-2ce263b517a5", 00:11:44.651 "is_configured": true, 00:11:44.651 "data_offset": 0, 00:11:44.651 "data_size": 65536 00:11:44.651 }, 00:11:44.651 { 00:11:44.651 "name": "BaseBdev2", 00:11:44.651 "uuid": "f32df323-594b-561b-a3f6-ebbad2dae41c", 00:11:44.651 "is_configured": true, 00:11:44.651 "data_offset": 0, 00:11:44.651 
"data_size": 65536 00:11:44.651 }, 00:11:44.651 { 00:11:44.651 "name": "BaseBdev3", 00:11:44.651 "uuid": "80dc6248-702c-5c44-a438-27e2c3572262", 00:11:44.651 "is_configured": true, 00:11:44.651 "data_offset": 0, 00:11:44.651 "data_size": 65536 00:11:44.651 }, 00:11:44.651 { 00:11:44.651 "name": "BaseBdev4", 00:11:44.651 "uuid": "6d0e86de-5568-5e10-b0bf-b04faf1f1524", 00:11:44.651 "is_configured": true, 00:11:44.651 "data_offset": 0, 00:11:44.651 "data_size": 65536 00:11:44.651 } 00:11:44.651 ] 00:11:44.651 }' 00:11:44.651 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:44.651 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:44.651 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:44.651 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:44.651 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:44.651 09:46:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.651 09:46:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:44.651 [2024-10-30 09:46:23.253173] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:44.910 [2024-10-30 09:46:23.395384] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:44.910 [2024-10-30 09:46:23.406208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.910 [2024-10-30 09:46:23.406263] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:44.910 [2024-10-30 09:46:23.406274] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:44.910 [2024-10-30 09:46:23.425214] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:11:44.910 09:46:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.910 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:44.910 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.910 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.910 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.910 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.910 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:44.910 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.910 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.910 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.910 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.910 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.910 09:46:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.910 09:46:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:44.910 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.910 09:46:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.910 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.910 "name": "raid_bdev1", 00:11:44.910 
"uuid": "14bca0f6-09cb-4561-bce2-704f687f1d9d", 00:11:44.910 "strip_size_kb": 0, 00:11:44.910 "state": "online", 00:11:44.910 "raid_level": "raid1", 00:11:44.910 "superblock": false, 00:11:44.910 "num_base_bdevs": 4, 00:11:44.910 "num_base_bdevs_discovered": 3, 00:11:44.910 "num_base_bdevs_operational": 3, 00:11:44.910 "base_bdevs_list": [ 00:11:44.910 { 00:11:44.910 "name": null, 00:11:44.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.910 "is_configured": false, 00:11:44.910 "data_offset": 0, 00:11:44.910 "data_size": 65536 00:11:44.910 }, 00:11:44.910 { 00:11:44.910 "name": "BaseBdev2", 00:11:44.910 "uuid": "f32df323-594b-561b-a3f6-ebbad2dae41c", 00:11:44.910 "is_configured": true, 00:11:44.910 "data_offset": 0, 00:11:44.910 "data_size": 65536 00:11:44.910 }, 00:11:44.910 { 00:11:44.910 "name": "BaseBdev3", 00:11:44.910 "uuid": "80dc6248-702c-5c44-a438-27e2c3572262", 00:11:44.910 "is_configured": true, 00:11:44.910 "data_offset": 0, 00:11:44.910 "data_size": 65536 00:11:44.910 }, 00:11:44.910 { 00:11:44.910 "name": "BaseBdev4", 00:11:44.910 "uuid": "6d0e86de-5568-5e10-b0bf-b04faf1f1524", 00:11:44.910 "is_configured": true, 00:11:44.910 "data_offset": 0, 00:11:44.910 "data_size": 65536 00:11:44.910 } 00:11:44.910 ] 00:11:44.910 }' 00:11:44.910 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.910 09:46:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.170 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:45.170 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:45.170 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:45.170 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:45.170 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:11:45.170 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.170 09:46:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.170 09:46:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.170 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.170 09:46:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.170 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:45.170 "name": "raid_bdev1", 00:11:45.170 "uuid": "14bca0f6-09cb-4561-bce2-704f687f1d9d", 00:11:45.170 "strip_size_kb": 0, 00:11:45.170 "state": "online", 00:11:45.170 "raid_level": "raid1", 00:11:45.170 "superblock": false, 00:11:45.170 "num_base_bdevs": 4, 00:11:45.170 "num_base_bdevs_discovered": 3, 00:11:45.170 "num_base_bdevs_operational": 3, 00:11:45.170 "base_bdevs_list": [ 00:11:45.170 { 00:11:45.170 "name": null, 00:11:45.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.170 "is_configured": false, 00:11:45.170 "data_offset": 0, 00:11:45.170 "data_size": 65536 00:11:45.170 }, 00:11:45.170 { 00:11:45.170 "name": "BaseBdev2", 00:11:45.170 "uuid": "f32df323-594b-561b-a3f6-ebbad2dae41c", 00:11:45.170 "is_configured": true, 00:11:45.170 "data_offset": 0, 00:11:45.170 "data_size": 65536 00:11:45.170 }, 00:11:45.170 { 00:11:45.170 "name": "BaseBdev3", 00:11:45.170 "uuid": "80dc6248-702c-5c44-a438-27e2c3572262", 00:11:45.170 "is_configured": true, 00:11:45.170 "data_offset": 0, 00:11:45.170 "data_size": 65536 00:11:45.170 }, 00:11:45.170 { 00:11:45.170 "name": "BaseBdev4", 00:11:45.170 "uuid": "6d0e86de-5568-5e10-b0bf-b04faf1f1524", 00:11:45.170 "is_configured": true, 00:11:45.170 "data_offset": 0, 00:11:45.170 "data_size": 65536 00:11:45.170 } 00:11:45.170 ] 00:11:45.170 }' 00:11:45.170 
09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:45.432 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:45.432 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:45.432 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:45.432 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:45.432 09:46:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.432 09:46:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.432 [2024-10-30 09:46:23.841551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:45.432 09:46:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.432 09:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:45.432 143.00 IOPS, 429.00 MiB/s [2024-10-30T09:46:24.052Z] [2024-10-30 09:46:23.889400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:45.432 [2024-10-30 09:46:23.891360] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:45.432 [2024-10-30 09:46:24.001435] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:45.432 [2024-10-30 09:46:24.001871] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:45.691 [2024-10-30 09:46:24.121460] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:45.691 [2024-10-30 09:46:24.121695] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 
offset_begin: 0 offset_end: 6144 00:11:45.952 [2024-10-30 09:46:24.469558] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:46.213 [2024-10-30 09:46:24.602604] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:46.213 [2024-10-30 09:46:24.603245] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:46.475 09:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:46.475 09:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:46.475 09:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:46.475 09:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:46.475 09:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:46.475 09:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.475 09:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.475 09:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:46.475 09:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.475 133.67 IOPS, 401.00 MiB/s [2024-10-30T09:46:25.095Z] 09:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.475 09:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:46.475 "name": "raid_bdev1", 00:11:46.475 "uuid": "14bca0f6-09cb-4561-bce2-704f687f1d9d", 00:11:46.475 "strip_size_kb": 0, 00:11:46.475 "state": "online", 00:11:46.475 "raid_level": "raid1", 00:11:46.475 
"superblock": false, 00:11:46.475 "num_base_bdevs": 4, 00:11:46.475 "num_base_bdevs_discovered": 4, 00:11:46.475 "num_base_bdevs_operational": 4, 00:11:46.475 "process": { 00:11:46.475 "type": "rebuild", 00:11:46.475 "target": "spare", 00:11:46.475 "progress": { 00:11:46.475 "blocks": 12288, 00:11:46.475 "percent": 18 00:11:46.475 } 00:11:46.475 }, 00:11:46.475 "base_bdevs_list": [ 00:11:46.475 { 00:11:46.475 "name": "spare", 00:11:46.475 "uuid": "9a24b7e3-1e52-5bf3-81b6-2ce263b517a5", 00:11:46.475 "is_configured": true, 00:11:46.475 "data_offset": 0, 00:11:46.475 "data_size": 65536 00:11:46.475 }, 00:11:46.475 { 00:11:46.475 "name": "BaseBdev2", 00:11:46.475 "uuid": "f32df323-594b-561b-a3f6-ebbad2dae41c", 00:11:46.475 "is_configured": true, 00:11:46.475 "data_offset": 0, 00:11:46.475 "data_size": 65536 00:11:46.475 }, 00:11:46.475 { 00:11:46.475 "name": "BaseBdev3", 00:11:46.475 "uuid": "80dc6248-702c-5c44-a438-27e2c3572262", 00:11:46.475 "is_configured": true, 00:11:46.475 "data_offset": 0, 00:11:46.475 "data_size": 65536 00:11:46.475 }, 00:11:46.475 { 00:11:46.475 "name": "BaseBdev4", 00:11:46.475 "uuid": "6d0e86de-5568-5e10-b0bf-b04faf1f1524", 00:11:46.475 "is_configured": true, 00:11:46.475 "data_offset": 0, 00:11:46.475 "data_size": 65536 00:11:46.475 } 00:11:46.475 ] 00:11:46.475 }' 00:11:46.475 09:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:46.475 09:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:46.475 09:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:46.475 09:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:46.475 09:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:46.475 09:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 
00:11:46.475 09:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:46.475 09:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:11:46.475 09:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:46.475 09:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.475 09:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:46.475 [2024-10-30 09:46:24.980518] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:46.736 [2024-10-30 09:46:25.181655] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:11:46.736 [2024-10-30 09:46:25.181852] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:11:46.736 09:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.736 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:11:46.736 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:11:46.736 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:46.736 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:46.736 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:46.736 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:46.736 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:46.736 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.736 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:11:46.736 09:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.736 09:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:46.736 09:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.736 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:46.736 "name": "raid_bdev1", 00:11:46.736 "uuid": "14bca0f6-09cb-4561-bce2-704f687f1d9d", 00:11:46.737 "strip_size_kb": 0, 00:11:46.737 "state": "online", 00:11:46.737 "raid_level": "raid1", 00:11:46.737 "superblock": false, 00:11:46.737 "num_base_bdevs": 4, 00:11:46.737 "num_base_bdevs_discovered": 3, 00:11:46.737 "num_base_bdevs_operational": 3, 00:11:46.737 "process": { 00:11:46.737 "type": "rebuild", 00:11:46.737 "target": "spare", 00:11:46.737 "progress": { 00:11:46.737 "blocks": 16384, 00:11:46.737 "percent": 25 00:11:46.737 } 00:11:46.737 }, 00:11:46.737 "base_bdevs_list": [ 00:11:46.737 { 00:11:46.737 "name": "spare", 00:11:46.737 "uuid": "9a24b7e3-1e52-5bf3-81b6-2ce263b517a5", 00:11:46.737 "is_configured": true, 00:11:46.737 "data_offset": 0, 00:11:46.737 "data_size": 65536 00:11:46.737 }, 00:11:46.737 { 00:11:46.737 "name": null, 00:11:46.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.737 "is_configured": false, 00:11:46.737 "data_offset": 0, 00:11:46.737 "data_size": 65536 00:11:46.737 }, 00:11:46.737 { 00:11:46.737 "name": "BaseBdev3", 00:11:46.737 "uuid": "80dc6248-702c-5c44-a438-27e2c3572262", 00:11:46.737 "is_configured": true, 00:11:46.737 "data_offset": 0, 00:11:46.737 "data_size": 65536 00:11:46.737 }, 00:11:46.737 { 00:11:46.737 "name": "BaseBdev4", 00:11:46.737 "uuid": "6d0e86de-5568-5e10-b0bf-b04faf1f1524", 00:11:46.737 "is_configured": true, 00:11:46.737 "data_offset": 0, 00:11:46.737 "data_size": 65536 00:11:46.737 } 00:11:46.737 ] 00:11:46.737 }' 00:11:46.737 09:46:25 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:46.737 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:46.737 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:46.737 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:46.737 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=380 00:11:46.737 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:46.737 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:46.737 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:46.737 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:46.737 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:46.737 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:46.737 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.737 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.737 09:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.737 09:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:46.737 09:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.737 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:46.737 "name": "raid_bdev1", 00:11:46.737 "uuid": "14bca0f6-09cb-4561-bce2-704f687f1d9d", 00:11:46.737 "strip_size_kb": 0, 00:11:46.737 
"state": "online", 00:11:46.737 "raid_level": "raid1", 00:11:46.737 "superblock": false, 00:11:46.737 "num_base_bdevs": 4, 00:11:46.737 "num_base_bdevs_discovered": 3, 00:11:46.737 "num_base_bdevs_operational": 3, 00:11:46.737 "process": { 00:11:46.737 "type": "rebuild", 00:11:46.737 "target": "spare", 00:11:46.737 "progress": { 00:11:46.737 "blocks": 16384, 00:11:46.737 "percent": 25 00:11:46.737 } 00:11:46.737 }, 00:11:46.737 "base_bdevs_list": [ 00:11:46.737 { 00:11:46.737 "name": "spare", 00:11:46.737 "uuid": "9a24b7e3-1e52-5bf3-81b6-2ce263b517a5", 00:11:46.737 "is_configured": true, 00:11:46.737 "data_offset": 0, 00:11:46.737 "data_size": 65536 00:11:46.737 }, 00:11:46.737 { 00:11:46.737 "name": null, 00:11:46.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.737 "is_configured": false, 00:11:46.737 "data_offset": 0, 00:11:46.737 "data_size": 65536 00:11:46.737 }, 00:11:46.737 { 00:11:46.737 "name": "BaseBdev3", 00:11:46.737 "uuid": "80dc6248-702c-5c44-a438-27e2c3572262", 00:11:46.737 "is_configured": true, 00:11:46.737 "data_offset": 0, 00:11:46.737 "data_size": 65536 00:11:46.737 }, 00:11:46.737 { 00:11:46.737 "name": "BaseBdev4", 00:11:46.737 "uuid": "6d0e86de-5568-5e10-b0bf-b04faf1f1524", 00:11:46.737 "is_configured": true, 00:11:46.737 "data_offset": 0, 00:11:46.737 "data_size": 65536 00:11:46.737 } 00:11:46.737 ] 00:11:46.737 }' 00:11:46.737 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:46.737 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:46.737 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:46.997 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:46.997 09:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:46.997 [2024-10-30 09:46:25.435125] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:46.997 [2024-10-30 09:46:25.435533] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:47.258 [2024-10-30 09:46:25.638263] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:47.520 [2024-10-30 09:46:25.889247] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:47.520 118.25 IOPS, 354.75 MiB/s [2024-10-30T09:46:26.140Z] [2024-10-30 09:46:26.099874] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:47.781 09:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:47.781 09:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:47.781 09:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:47.781 09:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:47.781 09:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:47.781 09:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:47.781 09:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.781 09:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.781 09:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.781 09:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.781 09:46:26 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.043 09:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:48.043 "name": "raid_bdev1", 00:11:48.043 "uuid": "14bca0f6-09cb-4561-bce2-704f687f1d9d", 00:11:48.043 "strip_size_kb": 0, 00:11:48.043 "state": "online", 00:11:48.043 "raid_level": "raid1", 00:11:48.043 "superblock": false, 00:11:48.043 "num_base_bdevs": 4, 00:11:48.043 "num_base_bdevs_discovered": 3, 00:11:48.043 "num_base_bdevs_operational": 3, 00:11:48.043 "process": { 00:11:48.043 "type": "rebuild", 00:11:48.043 "target": "spare", 00:11:48.043 "progress": { 00:11:48.043 "blocks": 32768, 00:11:48.043 "percent": 50 00:11:48.043 } 00:11:48.043 }, 00:11:48.043 "base_bdevs_list": [ 00:11:48.043 { 00:11:48.043 "name": "spare", 00:11:48.043 "uuid": "9a24b7e3-1e52-5bf3-81b6-2ce263b517a5", 00:11:48.043 "is_configured": true, 00:11:48.043 "data_offset": 0, 00:11:48.043 "data_size": 65536 00:11:48.043 }, 00:11:48.043 { 00:11:48.043 "name": null, 00:11:48.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.043 "is_configured": false, 00:11:48.043 "data_offset": 0, 00:11:48.043 "data_size": 65536 00:11:48.043 }, 00:11:48.043 { 00:11:48.043 "name": "BaseBdev3", 00:11:48.043 "uuid": "80dc6248-702c-5c44-a438-27e2c3572262", 00:11:48.043 "is_configured": true, 00:11:48.043 "data_offset": 0, 00:11:48.043 "data_size": 65536 00:11:48.043 }, 00:11:48.043 { 00:11:48.043 "name": "BaseBdev4", 00:11:48.043 "uuid": "6d0e86de-5568-5e10-b0bf-b04faf1f1524", 00:11:48.043 "is_configured": true, 00:11:48.043 "data_offset": 0, 00:11:48.043 "data_size": 65536 00:11:48.043 } 00:11:48.043 ] 00:11:48.043 }' 00:11:48.043 09:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:48.043 09:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:48.043 09:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target 
// "none"' 00:11:48.043 09:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:48.043 09:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:48.307 [2024-10-30 09:46:26.885792] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:11:48.880 105.60 IOPS, 316.80 MiB/s [2024-10-30T09:46:27.500Z] [2024-10-30 09:46:27.228433] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:11:48.880 09:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:48.880 09:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:48.880 09:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:48.880 09:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:48.880 09:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:48.880 09:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:48.880 09:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.880 09:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.880 09:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.880 09:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.880 09:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.141 09:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:49.141 "name": "raid_bdev1", 00:11:49.141 "uuid": 
"14bca0f6-09cb-4561-bce2-704f687f1d9d", 00:11:49.141 "strip_size_kb": 0, 00:11:49.141 "state": "online", 00:11:49.141 "raid_level": "raid1", 00:11:49.141 "superblock": false, 00:11:49.141 "num_base_bdevs": 4, 00:11:49.141 "num_base_bdevs_discovered": 3, 00:11:49.141 "num_base_bdevs_operational": 3, 00:11:49.141 "process": { 00:11:49.141 "type": "rebuild", 00:11:49.141 "target": "spare", 00:11:49.141 "progress": { 00:11:49.141 "blocks": 51200, 00:11:49.141 "percent": 78 00:11:49.141 } 00:11:49.141 }, 00:11:49.141 "base_bdevs_list": [ 00:11:49.141 { 00:11:49.141 "name": "spare", 00:11:49.141 "uuid": "9a24b7e3-1e52-5bf3-81b6-2ce263b517a5", 00:11:49.141 "is_configured": true, 00:11:49.141 "data_offset": 0, 00:11:49.141 "data_size": 65536 00:11:49.141 }, 00:11:49.141 { 00:11:49.141 "name": null, 00:11:49.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.141 "is_configured": false, 00:11:49.141 "data_offset": 0, 00:11:49.141 "data_size": 65536 00:11:49.141 }, 00:11:49.141 { 00:11:49.141 "name": "BaseBdev3", 00:11:49.141 "uuid": "80dc6248-702c-5c44-a438-27e2c3572262", 00:11:49.141 "is_configured": true, 00:11:49.141 "data_offset": 0, 00:11:49.141 "data_size": 65536 00:11:49.141 }, 00:11:49.141 { 00:11:49.141 "name": "BaseBdev4", 00:11:49.141 "uuid": "6d0e86de-5568-5e10-b0bf-b04faf1f1524", 00:11:49.141 "is_configured": true, 00:11:49.141 "data_offset": 0, 00:11:49.141 "data_size": 65536 00:11:49.141 } 00:11:49.141 ] 00:11:49.141 }' 00:11:49.141 09:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:49.141 09:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:49.141 09:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:49.141 [2024-10-30 09:46:27.568253] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:11:49.141 09:46:27 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:49.141 09:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:49.665 95.00 IOPS, 285.00 MiB/s [2024-10-30T09:46:28.285Z] [2024-10-30 09:46:28.229409] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:49.926 [2024-10-30 09:46:28.329423] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:49.926 [2024-10-30 09:46:28.338805] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:50.189 "name": "raid_bdev1", 00:11:50.189 "uuid": 
"14bca0f6-09cb-4561-bce2-704f687f1d9d", 00:11:50.189 "strip_size_kb": 0, 00:11:50.189 "state": "online", 00:11:50.189 "raid_level": "raid1", 00:11:50.189 "superblock": false, 00:11:50.189 "num_base_bdevs": 4, 00:11:50.189 "num_base_bdevs_discovered": 3, 00:11:50.189 "num_base_bdevs_operational": 3, 00:11:50.189 "base_bdevs_list": [ 00:11:50.189 { 00:11:50.189 "name": "spare", 00:11:50.189 "uuid": "9a24b7e3-1e52-5bf3-81b6-2ce263b517a5", 00:11:50.189 "is_configured": true, 00:11:50.189 "data_offset": 0, 00:11:50.189 "data_size": 65536 00:11:50.189 }, 00:11:50.189 { 00:11:50.189 "name": null, 00:11:50.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.189 "is_configured": false, 00:11:50.189 "data_offset": 0, 00:11:50.189 "data_size": 65536 00:11:50.189 }, 00:11:50.189 { 00:11:50.189 "name": "BaseBdev3", 00:11:50.189 "uuid": "80dc6248-702c-5c44-a438-27e2c3572262", 00:11:50.189 "is_configured": true, 00:11:50.189 "data_offset": 0, 00:11:50.189 "data_size": 65536 00:11:50.189 }, 00:11:50.189 { 00:11:50.189 "name": "BaseBdev4", 00:11:50.189 "uuid": "6d0e86de-5568-5e10-b0bf-b04faf1f1524", 00:11:50.189 "is_configured": true, 00:11:50.189 "data_offset": 0, 00:11:50.189 "data_size": 65536 00:11:50.189 } 00:11:50.189 ] 00:11:50.189 }' 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:50.189 "name": "raid_bdev1", 00:11:50.189 "uuid": "14bca0f6-09cb-4561-bce2-704f687f1d9d", 00:11:50.189 "strip_size_kb": 0, 00:11:50.189 "state": "online", 00:11:50.189 "raid_level": "raid1", 00:11:50.189 "superblock": false, 00:11:50.189 "num_base_bdevs": 4, 00:11:50.189 "num_base_bdevs_discovered": 3, 00:11:50.189 "num_base_bdevs_operational": 3, 00:11:50.189 "base_bdevs_list": [ 00:11:50.189 { 00:11:50.189 "name": "spare", 00:11:50.189 "uuid": "9a24b7e3-1e52-5bf3-81b6-2ce263b517a5", 00:11:50.189 "is_configured": true, 00:11:50.189 "data_offset": 0, 00:11:50.189 "data_size": 65536 00:11:50.189 }, 00:11:50.189 { 00:11:50.189 "name": null, 00:11:50.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.189 "is_configured": false, 00:11:50.189 "data_offset": 0, 00:11:50.189 "data_size": 65536 00:11:50.189 }, 00:11:50.189 { 00:11:50.189 "name": "BaseBdev3", 00:11:50.189 "uuid": "80dc6248-702c-5c44-a438-27e2c3572262", 00:11:50.189 "is_configured": true, 
00:11:50.189 "data_offset": 0, 00:11:50.189 "data_size": 65536 00:11:50.189 }, 00:11:50.189 { 00:11:50.189 "name": "BaseBdev4", 00:11:50.189 "uuid": "6d0e86de-5568-5e10-b0bf-b04faf1f1524", 00:11:50.189 "is_configured": true, 00:11:50.189 "data_offset": 0, 00:11:50.189 "data_size": 65536 00:11:50.189 } 00:11:50.189 ] 00:11:50.189 }' 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.189 09:46:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.451 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.451 "name": "raid_bdev1", 00:11:50.451 "uuid": "14bca0f6-09cb-4561-bce2-704f687f1d9d", 00:11:50.451 "strip_size_kb": 0, 00:11:50.451 "state": "online", 00:11:50.451 "raid_level": "raid1", 00:11:50.451 "superblock": false, 00:11:50.451 "num_base_bdevs": 4, 00:11:50.451 "num_base_bdevs_discovered": 3, 00:11:50.451 "num_base_bdevs_operational": 3, 00:11:50.451 "base_bdevs_list": [ 00:11:50.451 { 00:11:50.451 "name": "spare", 00:11:50.451 "uuid": "9a24b7e3-1e52-5bf3-81b6-2ce263b517a5", 00:11:50.451 "is_configured": true, 00:11:50.451 "data_offset": 0, 00:11:50.451 "data_size": 65536 00:11:50.451 }, 00:11:50.451 { 00:11:50.451 "name": null, 00:11:50.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.451 "is_configured": false, 00:11:50.451 "data_offset": 0, 00:11:50.451 "data_size": 65536 00:11:50.451 }, 00:11:50.451 { 00:11:50.451 "name": "BaseBdev3", 00:11:50.451 "uuid": "80dc6248-702c-5c44-a438-27e2c3572262", 00:11:50.451 "is_configured": true, 00:11:50.451 "data_offset": 0, 00:11:50.451 "data_size": 65536 00:11:50.451 }, 00:11:50.451 { 00:11:50.451 "name": "BaseBdev4", 00:11:50.451 "uuid": "6d0e86de-5568-5e10-b0bf-b04faf1f1524", 00:11:50.451 "is_configured": true, 00:11:50.451 "data_offset": 0, 00:11:50.451 "data_size": 65536 00:11:50.451 } 00:11:50.451 ] 00:11:50.451 }' 00:11:50.451 09:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.451 09:46:28 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:11:50.713 86.29 IOPS, 258.86 MiB/s [2024-10-30T09:46:29.333Z] 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:50.713 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.713 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.713 [2024-10-30 09:46:29.107552] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:50.713 [2024-10-30 09:46:29.107579] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:50.713 00:11:50.713 Latency(us) 00:11:50.713 [2024-10-30T09:46:29.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.713 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:50.713 raid_bdev1 : 7.34 83.50 250.51 0.00 0.00 17012.75 330.83 118569.75 00:11:50.713 [2024-10-30T09:46:29.333Z] =================================================================================================================== 00:11:50.713 [2024-10-30T09:46:29.333Z] Total : 83.50 250.51 0.00 0.00 17012.75 330.83 118569.75 00:11:50.713 [2024-10-30 09:46:29.214743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.713 [2024-10-30 09:46:29.214892] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:50.713 [2024-10-30 09:46:29.215020] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:50.713 [2024-10-30 09:46:29.215121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:50.713 { 00:11:50.713 "results": [ 00:11:50.713 { 00:11:50.713 "job": "raid_bdev1", 00:11:50.713 "core_mask": "0x1", 00:11:50.713 "workload": "randrw", 00:11:50.713 "percentage": 50, 00:11:50.713 
"status": "finished", 00:11:50.713 "queue_depth": 2, 00:11:50.713 "io_size": 3145728, 00:11:50.713 "runtime": 7.340923, 00:11:50.713 "iops": 83.50448574382268, 00:11:50.713 "mibps": 250.51345723146807, 00:11:50.713 "io_failed": 0, 00:11:50.713 "io_timeout": 0, 00:11:50.713 "avg_latency_us": 17012.745508846783, 00:11:50.713 "min_latency_us": 330.83076923076925, 00:11:50.713 "max_latency_us": 118569.74769230769 00:11:50.713 } 00:11:50.713 ], 00:11:50.713 "core_count": 1 00:11:50.713 } 00:11:50.713 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.713 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.713 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:50.713 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.713 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.713 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.713 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:50.713 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:50.713 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:50.713 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:50.713 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:50.713 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:50.713 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:50.713 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:50.713 09:46:29 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:50.713 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:50.713 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:50.713 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:50.713 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:50.973 /dev/nbd0 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:50.973 1+0 records in 00:11:50.973 1+0 records out 00:11:50.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293136 s, 14.0 MB/s 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@12 -- # local i 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:50.973 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:11:51.231 /dev/nbd1 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:51.231 1+0 records in 00:11:51.231 1+0 records out 00:11:51.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241396 s, 17.0 MB/s 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@888 -- # size=4096 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:51.231 09:46:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:51.532 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:51.532 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:51.532 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:51.532 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:51.532 09:46:30 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:51.532 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:51.532 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:51.532 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:51.532 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:51.532 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:11:51.532 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:11:51.532 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:51.532 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:11:51.532 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:51.532 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:51.532 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:51.532 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:51.532 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:51.532 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:51.532 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:11:51.798 /dev/nbd1 00:11:51.798 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:51.798 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:51.798 09:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 
-- # local nbd_name=nbd1 00:11:51.798 09:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:11:51.798 09:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:51.798 09:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:51.798 09:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:11:51.798 09:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:11:51.798 09:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:51.798 09:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:51.798 09:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:51.798 1+0 records in 00:11:51.798 1+0 records out 00:11:51.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307315 s, 13.3 MB/s 00:11:51.798 09:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:51.798 09:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:11:51.798 09:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:51.798 09:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:51.798 09:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:11:51.798 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:51.798 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:51.798 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:51.798 
09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:51.798 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:51.798 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:51.798 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:51.798 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:51.798 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:51.798 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:52.056 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:52.056 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:52.056 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:52.056 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:52.056 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:52.056 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:52.056 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:52.056 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:52.056 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:52.056 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:52.056 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:52.056 09:46:30 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:52.056 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:52.056 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:52.056 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:52.318 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:52.318 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:52.318 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:52.318 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:52.318 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:52.318 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:52.318 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:52.318 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:52.318 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:52.318 09:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76651 00:11:52.318 09:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 76651 ']' 00:11:52.318 09:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 76651 00:11:52.318 09:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:11:52.318 09:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:52.318 09:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76651 
00:11:52.318 killing process with pid 76651 00:11:52.318 Received shutdown signal, test time was about 8.951561 seconds 00:11:52.318 00:11:52.318 Latency(us) 00:11:52.318 [2024-10-30T09:46:30.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:52.318 [2024-10-30T09:46:30.938Z] =================================================================================================================== 00:11:52.318 [2024-10-30T09:46:30.938Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:52.318 09:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:52.318 09:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:52.318 09:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76651' 00:11:52.318 09:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 76651 00:11:52.318 09:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 76651 00:11:52.318 [2024-10-30 09:46:30.810164] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:52.580 [2024-10-30 09:46:31.068275] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:53.523 00:11:53.523 real 0m11.574s 00:11:53.523 user 0m14.383s 00:11:53.523 sys 0m1.261s 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.523 ************************************ 00:11:53.523 END TEST raid_rebuild_test_io 00:11:53.523 ************************************ 00:11:53.523 09:46:31 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:11:53.523 09:46:31 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:11:53.523 09:46:31 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:53.523 09:46:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:53.523 ************************************ 00:11:53.523 START TEST raid_rebuild_test_sb_io 00:11:53.523 ************************************ 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true true true 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:11:53.523 09:46:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77049 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77049 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- 
# '[' -z 77049 ']' 00:11:53.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.523 09:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:53.523 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:53.523 Zero copy mechanism will not be used. 00:11:53.523 [2024-10-30 09:46:31.935334] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:11:53.523 [2024-10-30 09:46:31.935457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77049 ] 00:11:53.523 [2024-10-30 09:46:32.093987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.785 [2024-10-30 09:46:32.196015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.785 [2024-10-30 09:46:32.332322] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:53.785 [2024-10-30 09:46:32.332368] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.358 BaseBdev1_malloc 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.358 [2024-10-30 09:46:32.818369] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:54.358 [2024-10-30 09:46:32.818469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.358 [2024-10-30 09:46:32.818603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:54.358 [2024-10-30 09:46:32.818697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.358 [2024-10-30 09:46:32.820879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.358 [2024-10-30 09:46:32.821030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:54.358 BaseBdev1 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.358 BaseBdev2_malloc 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.358 [2024-10-30 09:46:32.854563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:54.358 [2024-10-30 09:46:32.854713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:11:54.358 [2024-10-30 09:46:32.854753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:54.358 [2024-10-30 09:46:32.854817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.358 [2024-10-30 09:46:32.856921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.358 [2024-10-30 09:46:32.857029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:54.358 BaseBdev2 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.358 BaseBdev3_malloc 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.358 [2024-10-30 09:46:32.908263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:11:54.358 [2024-10-30 09:46:32.908415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.358 [2024-10-30 09:46:32.908456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:54.358 
[2024-10-30 09:46:32.908596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.358 [2024-10-30 09:46:32.910727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.358 [2024-10-30 09:46:32.910838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:54.358 BaseBdev3 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:54.358 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.359 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.359 BaseBdev4_malloc 00:11:54.359 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.359 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:11:54.359 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.359 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.359 [2024-10-30 09:46:32.948403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:11:54.359 [2024-10-30 09:46:32.948539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.359 [2024-10-30 09:46:32.948564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:54.359 [2024-10-30 09:46:32.948576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.359 [2024-10-30 09:46:32.950666] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.359 [2024-10-30 09:46:32.950704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:54.359 BaseBdev4 00:11:54.359 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.359 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:54.359 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.359 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.621 spare_malloc 00:11:54.621 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.621 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:54.621 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.621 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.621 spare_delay 00:11:54.621 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.621 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:54.621 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.621 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.621 [2024-10-30 09:46:32.996662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:54.621 [2024-10-30 09:46:32.996805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.621 [2024-10-30 09:46:32.996843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:11:54.621 [2024-10-30 09:46:32.997018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.621 [2024-10-30 09:46:32.999226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.621 [2024-10-30 09:46:32.999260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:54.621 spare 00:11:54.621 09:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.621 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:11:54.621 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.621 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.621 [2024-10-30 09:46:33.004717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:54.621 [2024-10-30 09:46:33.006630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:54.621 [2024-10-30 09:46:33.006777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:54.621 [2024-10-30 09:46:33.006851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:54.621 [2024-10-30 09:46:33.007175] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:54.621 [2024-10-30 09:46:33.007198] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:54.621 [2024-10-30 09:46:33.007451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:54.621 [2024-10-30 09:46:33.007606] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:54.621 [2024-10-30 09:46:33.007614] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:54.621 [2024-10-30 09:46:33.007752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.621 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.621 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:54.621 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.621 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.621 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.621 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.621 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.621 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.621 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.621 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.621 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.621 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.621 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.621 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.621 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.622 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.622 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.622 "name": "raid_bdev1", 00:11:54.622 "uuid": "f1de5ed6-73ff-489b-98c6-33914aab6397", 00:11:54.622 "strip_size_kb": 0, 00:11:54.622 "state": "online", 00:11:54.622 "raid_level": "raid1", 00:11:54.622 "superblock": true, 00:11:54.622 "num_base_bdevs": 4, 00:11:54.622 "num_base_bdevs_discovered": 4, 00:11:54.622 "num_base_bdevs_operational": 4, 00:11:54.622 "base_bdevs_list": [ 00:11:54.622 { 00:11:54.622 "name": "BaseBdev1", 00:11:54.622 "uuid": "e4f5d074-6151-5d3a-b16c-f48a2ed3df16", 00:11:54.622 "is_configured": true, 00:11:54.622 "data_offset": 2048, 00:11:54.622 "data_size": 63488 00:11:54.622 }, 00:11:54.622 { 00:11:54.622 "name": "BaseBdev2", 00:11:54.622 "uuid": "236000e0-af79-547a-9442-8b676b5cacbb", 00:11:54.622 "is_configured": true, 00:11:54.622 "data_offset": 2048, 00:11:54.622 "data_size": 63488 00:11:54.622 }, 00:11:54.622 { 00:11:54.622 "name": "BaseBdev3", 00:11:54.622 "uuid": "4a4ab97d-48fa-5395-825a-160954bbe7f8", 00:11:54.622 "is_configured": true, 00:11:54.622 "data_offset": 2048, 00:11:54.622 "data_size": 63488 00:11:54.622 }, 00:11:54.622 { 00:11:54.622 "name": "BaseBdev4", 00:11:54.622 "uuid": "07e7ad0c-f2a0-52d8-be9c-e4a973af4305", 00:11:54.622 "is_configured": true, 00:11:54.622 "data_offset": 2048, 00:11:54.622 "data_size": 63488 00:11:54.622 } 00:11:54.622 ] 00:11:54.622 }' 00:11:54.622 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.622 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.883 [2024-10-30 09:46:33.329161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:54.883 [2024-10-30 09:46:33.392774] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.883 09:46:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.883 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.883 "name": "raid_bdev1", 00:11:54.883 "uuid": "f1de5ed6-73ff-489b-98c6-33914aab6397", 00:11:54.883 "strip_size_kb": 0, 00:11:54.883 "state": "online", 00:11:54.883 "raid_level": "raid1", 00:11:54.883 
"superblock": true, 00:11:54.883 "num_base_bdevs": 4, 00:11:54.883 "num_base_bdevs_discovered": 3, 00:11:54.883 "num_base_bdevs_operational": 3, 00:11:54.883 "base_bdevs_list": [ 00:11:54.883 { 00:11:54.883 "name": null, 00:11:54.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.883 "is_configured": false, 00:11:54.883 "data_offset": 0, 00:11:54.883 "data_size": 63488 00:11:54.883 }, 00:11:54.883 { 00:11:54.883 "name": "BaseBdev2", 00:11:54.883 "uuid": "236000e0-af79-547a-9442-8b676b5cacbb", 00:11:54.883 "is_configured": true, 00:11:54.883 "data_offset": 2048, 00:11:54.883 "data_size": 63488 00:11:54.883 }, 00:11:54.884 { 00:11:54.884 "name": "BaseBdev3", 00:11:54.884 "uuid": "4a4ab97d-48fa-5395-825a-160954bbe7f8", 00:11:54.884 "is_configured": true, 00:11:54.884 "data_offset": 2048, 00:11:54.884 "data_size": 63488 00:11:54.884 }, 00:11:54.884 { 00:11:54.884 "name": "BaseBdev4", 00:11:54.884 "uuid": "07e7ad0c-f2a0-52d8-be9c-e4a973af4305", 00:11:54.884 "is_configured": true, 00:11:54.884 "data_offset": 2048, 00:11:54.884 "data_size": 63488 00:11:54.884 } 00:11:54.884 ] 00:11:54.884 }' 00:11:54.884 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.884 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.884 [2024-10-30 09:46:33.482204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:54.884 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:54.884 Zero copy mechanism will not be used. 00:11:54.884 Running I/O for 60 seconds... 
00:11:55.143 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:55.143 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.143 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.143 [2024-10-30 09:46:33.701793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:55.143 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.143 09:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:55.402 [2024-10-30 09:46:33.770859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:11:55.402 [2024-10-30 09:46:33.772993] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:55.402 [2024-10-30 09:46:33.883110] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:55.402 [2024-10-30 09:46:33.884224] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:55.662 [2024-10-30 09:46:34.102230] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:55.662 [2024-10-30 09:46:34.102624] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:55.923 [2024-10-30 09:46:34.450503] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:56.183 116.00 IOPS, 348.00 MiB/s [2024-10-30T09:46:34.803Z] [2024-10-30 09:46:34.686735] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:56.183 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:56.183 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:56.183 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:56.183 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:56.183 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:56.183 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.183 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.183 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.183 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.183 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.183 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:56.183 "name": "raid_bdev1", 00:11:56.183 "uuid": "f1de5ed6-73ff-489b-98c6-33914aab6397", 00:11:56.183 "strip_size_kb": 0, 00:11:56.183 "state": "online", 00:11:56.183 "raid_level": "raid1", 00:11:56.183 "superblock": true, 00:11:56.183 "num_base_bdevs": 4, 00:11:56.183 "num_base_bdevs_discovered": 4, 00:11:56.183 "num_base_bdevs_operational": 4, 00:11:56.183 "process": { 00:11:56.183 "type": "rebuild", 00:11:56.183 "target": "spare", 00:11:56.183 "progress": { 00:11:56.183 "blocks": 10240, 00:11:56.183 "percent": 16 00:11:56.183 } 00:11:56.183 }, 00:11:56.183 "base_bdevs_list": [ 00:11:56.183 { 00:11:56.183 "name": "spare", 00:11:56.183 "uuid": "5394723e-8c60-5a0d-98f8-d94199e8f268", 00:11:56.183 "is_configured": true, 00:11:56.183 "data_offset": 2048, 00:11:56.183 "data_size": 63488 
00:11:56.183 }, 00:11:56.183 { 00:11:56.183 "name": "BaseBdev2", 00:11:56.183 "uuid": "236000e0-af79-547a-9442-8b676b5cacbb", 00:11:56.183 "is_configured": true, 00:11:56.183 "data_offset": 2048, 00:11:56.183 "data_size": 63488 00:11:56.183 }, 00:11:56.183 { 00:11:56.183 "name": "BaseBdev3", 00:11:56.183 "uuid": "4a4ab97d-48fa-5395-825a-160954bbe7f8", 00:11:56.183 "is_configured": true, 00:11:56.183 "data_offset": 2048, 00:11:56.183 "data_size": 63488 00:11:56.183 }, 00:11:56.183 { 00:11:56.184 "name": "BaseBdev4", 00:11:56.184 "uuid": "07e7ad0c-f2a0-52d8-be9c-e4a973af4305", 00:11:56.184 "is_configured": true, 00:11:56.184 "data_offset": 2048, 00:11:56.184 "data_size": 63488 00:11:56.184 } 00:11:56.184 ] 00:11:56.184 }' 00:11:56.184 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:56.445 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:56.445 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:56.445 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:56.445 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:56.445 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.445 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.445 [2024-10-30 09:46:34.839499] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:56.445 [2024-10-30 09:46:34.916832] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:56.445 [2024-10-30 09:46:34.935498] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:56.445 [2024-10-30 
09:46:34.945346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.445 [2024-10-30 09:46:34.945384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:56.445 [2024-10-30 09:46:34.945395] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:56.445 [2024-10-30 09:46:34.962895] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:11:56.445 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.445 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:56.445 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.445 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.445 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.445 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.445 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:56.445 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.445 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.445 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.445 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.445 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.445 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.445 09:46:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.445 09:46:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.445 09:46:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.445 09:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.445 "name": "raid_bdev1", 00:11:56.445 "uuid": "f1de5ed6-73ff-489b-98c6-33914aab6397", 00:11:56.445 "strip_size_kb": 0, 00:11:56.445 "state": "online", 00:11:56.445 "raid_level": "raid1", 00:11:56.445 "superblock": true, 00:11:56.445 "num_base_bdevs": 4, 00:11:56.445 "num_base_bdevs_discovered": 3, 00:11:56.445 "num_base_bdevs_operational": 3, 00:11:56.445 "base_bdevs_list": [ 00:11:56.445 { 00:11:56.445 "name": null, 00:11:56.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.445 "is_configured": false, 00:11:56.445 "data_offset": 0, 00:11:56.445 "data_size": 63488 00:11:56.445 }, 00:11:56.445 { 00:11:56.445 "name": "BaseBdev2", 00:11:56.445 "uuid": "236000e0-af79-547a-9442-8b676b5cacbb", 00:11:56.445 "is_configured": true, 00:11:56.445 "data_offset": 2048, 00:11:56.445 "data_size": 63488 00:11:56.445 }, 00:11:56.445 { 00:11:56.445 "name": "BaseBdev3", 00:11:56.445 "uuid": "4a4ab97d-48fa-5395-825a-160954bbe7f8", 00:11:56.445 "is_configured": true, 00:11:56.445 "data_offset": 2048, 00:11:56.445 "data_size": 63488 00:11:56.445 }, 00:11:56.445 { 00:11:56.445 "name": "BaseBdev4", 00:11:56.445 "uuid": "07e7ad0c-f2a0-52d8-be9c-e4a973af4305", 00:11:56.445 "is_configured": true, 00:11:56.445 "data_offset": 2048, 00:11:56.445 "data_size": 63488 00:11:56.445 } 00:11:56.445 ] 00:11:56.445 }' 00:11:56.445 09:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.445 09:46:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.707 09:46:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:56.707 09:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:56.707 09:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:56.707 09:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:56.707 09:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:56.707 09:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.707 09:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.707 09:46:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.707 09:46:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.967 09:46:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.967 09:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:56.967 "name": "raid_bdev1", 00:11:56.967 "uuid": "f1de5ed6-73ff-489b-98c6-33914aab6397", 00:11:56.967 "strip_size_kb": 0, 00:11:56.967 "state": "online", 00:11:56.967 "raid_level": "raid1", 00:11:56.967 "superblock": true, 00:11:56.967 "num_base_bdevs": 4, 00:11:56.967 "num_base_bdevs_discovered": 3, 00:11:56.967 "num_base_bdevs_operational": 3, 00:11:56.967 "base_bdevs_list": [ 00:11:56.967 { 00:11:56.967 "name": null, 00:11:56.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.967 "is_configured": false, 00:11:56.967 "data_offset": 0, 00:11:56.967 "data_size": 63488 00:11:56.967 }, 00:11:56.967 { 00:11:56.967 "name": "BaseBdev2", 00:11:56.967 "uuid": "236000e0-af79-547a-9442-8b676b5cacbb", 00:11:56.967 "is_configured": true, 00:11:56.967 "data_offset": 2048, 00:11:56.967 "data_size": 63488 
00:11:56.967 }, 00:11:56.967 { 00:11:56.967 "name": "BaseBdev3", 00:11:56.967 "uuid": "4a4ab97d-48fa-5395-825a-160954bbe7f8", 00:11:56.967 "is_configured": true, 00:11:56.967 "data_offset": 2048, 00:11:56.967 "data_size": 63488 00:11:56.967 }, 00:11:56.967 { 00:11:56.967 "name": "BaseBdev4", 00:11:56.967 "uuid": "07e7ad0c-f2a0-52d8-be9c-e4a973af4305", 00:11:56.967 "is_configured": true, 00:11:56.967 "data_offset": 2048, 00:11:56.967 "data_size": 63488 00:11:56.967 } 00:11:56.967 ] 00:11:56.967 }' 00:11:56.967 09:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:56.967 09:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:56.967 09:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:56.967 09:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:56.967 09:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:56.967 09:46:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.967 09:46:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.967 [2024-10-30 09:46:35.407768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:56.967 09:46:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.967 09:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:56.967 [2024-10-30 09:46:35.492179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:56.967 122.50 IOPS, 367.50 MiB/s [2024-10-30T09:46:35.587Z] [2024-10-30 09:46:35.494174] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:57.225 [2024-10-30 09:46:35.632985] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:57.486 [2024-10-30 09:46:35.852492] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:57.486 [2024-10-30 09:46:35.853242] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:57.743 [2024-10-30 09:46:36.317189] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:58.001 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:58.001 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:58.001 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:58.001 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:58.001 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:58.001 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.001 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.001 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.001 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:58.001 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.001 116.67 IOPS, 350.00 MiB/s [2024-10-30T09:46:36.621Z] 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:58.001 "name": "raid_bdev1", 00:11:58.001 "uuid": "f1de5ed6-73ff-489b-98c6-33914aab6397", 00:11:58.001 
"strip_size_kb": 0, 00:11:58.001 "state": "online", 00:11:58.001 "raid_level": "raid1", 00:11:58.001 "superblock": true, 00:11:58.001 "num_base_bdevs": 4, 00:11:58.001 "num_base_bdevs_discovered": 4, 00:11:58.001 "num_base_bdevs_operational": 4, 00:11:58.001 "process": { 00:11:58.001 "type": "rebuild", 00:11:58.001 "target": "spare", 00:11:58.001 "progress": { 00:11:58.001 "blocks": 10240, 00:11:58.001 "percent": 16 00:11:58.001 } 00:11:58.001 }, 00:11:58.001 "base_bdevs_list": [ 00:11:58.001 { 00:11:58.001 "name": "spare", 00:11:58.001 "uuid": "5394723e-8c60-5a0d-98f8-d94199e8f268", 00:11:58.001 "is_configured": true, 00:11:58.001 "data_offset": 2048, 00:11:58.001 "data_size": 63488 00:11:58.001 }, 00:11:58.001 { 00:11:58.001 "name": "BaseBdev2", 00:11:58.001 "uuid": "236000e0-af79-547a-9442-8b676b5cacbb", 00:11:58.001 "is_configured": true, 00:11:58.001 "data_offset": 2048, 00:11:58.001 "data_size": 63488 00:11:58.001 }, 00:11:58.001 { 00:11:58.001 "name": "BaseBdev3", 00:11:58.001 "uuid": "4a4ab97d-48fa-5395-825a-160954bbe7f8", 00:11:58.001 "is_configured": true, 00:11:58.001 "data_offset": 2048, 00:11:58.001 "data_size": 63488 00:11:58.001 }, 00:11:58.001 { 00:11:58.001 "name": "BaseBdev4", 00:11:58.001 "uuid": "07e7ad0c-f2a0-52d8-be9c-e4a973af4305", 00:11:58.001 "is_configured": true, 00:11:58.001 "data_offset": 2048, 00:11:58.001 "data_size": 63488 00:11:58.001 } 00:11:58.001 ] 00:11:58.001 }' 00:11:58.001 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:58.001 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:58.001 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:58.001 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:58.001 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 
00:11:58.001 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:58.001 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:58.001 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:11:58.001 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:58.001 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:11:58.001 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:58.001 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.001 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:58.001 [2024-10-30 09:46:36.564078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:58.257 [2024-10-30 09:46:36.659029] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:58.257 [2024-10-30 09:46:36.660177] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:58.257 [2024-10-30 09:46:36.869216] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:11:58.257 [2024-10-30 09:46:36.869250] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:11:58.513 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- 
# verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:58.514 "name": "raid_bdev1", 00:11:58.514 "uuid": "f1de5ed6-73ff-489b-98c6-33914aab6397", 00:11:58.514 "strip_size_kb": 0, 00:11:58.514 "state": "online", 00:11:58.514 "raid_level": "raid1", 00:11:58.514 "superblock": true, 00:11:58.514 "num_base_bdevs": 4, 00:11:58.514 "num_base_bdevs_discovered": 3, 00:11:58.514 "num_base_bdevs_operational": 3, 00:11:58.514 "process": { 00:11:58.514 "type": "rebuild", 00:11:58.514 "target": "spare", 00:11:58.514 "progress": { 00:11:58.514 "blocks": 14336, 00:11:58.514 "percent": 22 00:11:58.514 } 00:11:58.514 }, 00:11:58.514 "base_bdevs_list": [ 00:11:58.514 { 00:11:58.514 "name": "spare", 00:11:58.514 "uuid": "5394723e-8c60-5a0d-98f8-d94199e8f268", 00:11:58.514 "is_configured": true, 00:11:58.514 "data_offset": 2048, 00:11:58.514 "data_size": 63488 00:11:58.514 }, 00:11:58.514 { 
00:11:58.514 "name": null, 00:11:58.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.514 "is_configured": false, 00:11:58.514 "data_offset": 0, 00:11:58.514 "data_size": 63488 00:11:58.514 }, 00:11:58.514 { 00:11:58.514 "name": "BaseBdev3", 00:11:58.514 "uuid": "4a4ab97d-48fa-5395-825a-160954bbe7f8", 00:11:58.514 "is_configured": true, 00:11:58.514 "data_offset": 2048, 00:11:58.514 "data_size": 63488 00:11:58.514 }, 00:11:58.514 { 00:11:58.514 "name": "BaseBdev4", 00:11:58.514 "uuid": "07e7ad0c-f2a0-52d8-be9c-e4a973af4305", 00:11:58.514 "is_configured": true, 00:11:58.514 "data_offset": 2048, 00:11:58.514 "data_size": 63488 00:11:58.514 } 00:11:58.514 ] 00:11:58.514 }' 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=391 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:58.514 09:46:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.514 09:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:58.514 09:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.514 09:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:58.514 "name": "raid_bdev1", 00:11:58.514 "uuid": "f1de5ed6-73ff-489b-98c6-33914aab6397", 00:11:58.514 "strip_size_kb": 0, 00:11:58.514 "state": "online", 00:11:58.514 "raid_level": "raid1", 00:11:58.514 "superblock": true, 00:11:58.514 "num_base_bdevs": 4, 00:11:58.514 "num_base_bdevs_discovered": 3, 00:11:58.514 "num_base_bdevs_operational": 3, 00:11:58.514 "process": { 00:11:58.514 "type": "rebuild", 00:11:58.514 "target": "spare", 00:11:58.514 "progress": { 00:11:58.514 "blocks": 16384, 00:11:58.514 "percent": 25 00:11:58.514 } 00:11:58.514 }, 00:11:58.514 "base_bdevs_list": [ 00:11:58.514 { 00:11:58.514 "name": "spare", 00:11:58.514 "uuid": "5394723e-8c60-5a0d-98f8-d94199e8f268", 00:11:58.514 "is_configured": true, 00:11:58.514 "data_offset": 2048, 00:11:58.514 "data_size": 63488 00:11:58.514 }, 00:11:58.514 { 00:11:58.514 "name": null, 00:11:58.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.514 "is_configured": false, 00:11:58.514 "data_offset": 0, 00:11:58.514 "data_size": 63488 00:11:58.514 }, 00:11:58.514 { 00:11:58.514 "name": "BaseBdev3", 00:11:58.514 "uuid": "4a4ab97d-48fa-5395-825a-160954bbe7f8", 00:11:58.514 "is_configured": true, 00:11:58.514 "data_offset": 2048, 00:11:58.514 "data_size": 63488 00:11:58.514 }, 00:11:58.514 { 00:11:58.514 "name": "BaseBdev4", 00:11:58.514 "uuid": 
"07e7ad0c-f2a0-52d8-be9c-e4a973af4305", 00:11:58.514 "is_configured": true, 00:11:58.514 "data_offset": 2048, 00:11:58.514 "data_size": 63488 00:11:58.514 } 00:11:58.514 ] 00:11:58.514 }' 00:11:58.514 09:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:58.514 09:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:58.514 09:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:58.514 09:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:58.514 09:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:59.077 104.25 IOPS, 312.75 MiB/s [2024-10-30T09:46:37.697Z] [2024-10-30 09:46:37.658133] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:59.338 [2024-10-30 09:46:37.897761] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:11:59.596 09:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:59.596 09:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:59.596 09:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:59.596 09:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:59.596 09:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:59.596 09:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:59.596 09:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.596 09:46:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.596 09:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.596 09:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.596 09:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.596 [2024-10-30 09:46:38.114105] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:11:59.596 09:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:59.596 "name": "raid_bdev1", 00:11:59.596 "uuid": "f1de5ed6-73ff-489b-98c6-33914aab6397", 00:11:59.596 "strip_size_kb": 0, 00:11:59.596 "state": "online", 00:11:59.596 "raid_level": "raid1", 00:11:59.596 "superblock": true, 00:11:59.596 "num_base_bdevs": 4, 00:11:59.596 "num_base_bdevs_discovered": 3, 00:11:59.596 "num_base_bdevs_operational": 3, 00:11:59.596 "process": { 00:11:59.596 "type": "rebuild", 00:11:59.596 "target": "spare", 00:11:59.596 "progress": { 00:11:59.596 "blocks": 32768, 00:11:59.596 "percent": 51 00:11:59.596 } 00:11:59.596 }, 00:11:59.596 "base_bdevs_list": [ 00:11:59.596 { 00:11:59.596 "name": "spare", 00:11:59.596 "uuid": "5394723e-8c60-5a0d-98f8-d94199e8f268", 00:11:59.596 "is_configured": true, 00:11:59.596 "data_offset": 2048, 00:11:59.596 "data_size": 63488 00:11:59.596 }, 00:11:59.596 { 00:11:59.596 "name": null, 00:11:59.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.596 "is_configured": false, 00:11:59.596 "data_offset": 0, 00:11:59.596 "data_size": 63488 00:11:59.596 }, 00:11:59.596 { 00:11:59.596 "name": "BaseBdev3", 00:11:59.596 "uuid": "4a4ab97d-48fa-5395-825a-160954bbe7f8", 00:11:59.596 "is_configured": true, 00:11:59.596 "data_offset": 2048, 00:11:59.596 "data_size": 63488 00:11:59.596 }, 00:11:59.596 { 00:11:59.596 "name": "BaseBdev4", 00:11:59.596 "uuid": 
"07e7ad0c-f2a0-52d8-be9c-e4a973af4305", 00:11:59.596 "is_configured": true, 00:11:59.596 "data_offset": 2048, 00:11:59.596 "data_size": 63488 00:11:59.596 } 00:11:59.596 ] 00:11:59.596 }' 00:11:59.596 09:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:59.597 09:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:59.597 09:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:59.597 09:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:59.597 09:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:59.856 [2024-10-30 09:46:38.356470] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:00.113 95.00 IOPS, 285.00 MiB/s [2024-10-30T09:46:38.733Z] [2024-10-30 09:46:38.593907] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:00.678 [2024-10-30 09:46:39.150164] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:00.678 09:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:00.678 09:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:00.679 09:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:00.679 09:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:00.679 09:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:00.679 09:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:00.679 
09:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.679 09:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.679 09:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.679 09:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.679 09:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.679 09:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:00.679 "name": "raid_bdev1", 00:12:00.679 "uuid": "f1de5ed6-73ff-489b-98c6-33914aab6397", 00:12:00.679 "strip_size_kb": 0, 00:12:00.679 "state": "online", 00:12:00.679 "raid_level": "raid1", 00:12:00.679 "superblock": true, 00:12:00.679 "num_base_bdevs": 4, 00:12:00.679 "num_base_bdevs_discovered": 3, 00:12:00.679 "num_base_bdevs_operational": 3, 00:12:00.679 "process": { 00:12:00.679 "type": "rebuild", 00:12:00.679 "target": "spare", 00:12:00.679 "progress": { 00:12:00.679 "blocks": 51200, 00:12:00.679 "percent": 80 00:12:00.679 } 00:12:00.679 }, 00:12:00.679 "base_bdevs_list": [ 00:12:00.679 { 00:12:00.679 "name": "spare", 00:12:00.679 "uuid": "5394723e-8c60-5a0d-98f8-d94199e8f268", 00:12:00.679 "is_configured": true, 00:12:00.679 "data_offset": 2048, 00:12:00.679 "data_size": 63488 00:12:00.679 }, 00:12:00.679 { 00:12:00.679 "name": null, 00:12:00.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.679 "is_configured": false, 00:12:00.679 "data_offset": 0, 00:12:00.679 "data_size": 63488 00:12:00.679 }, 00:12:00.679 { 00:12:00.679 "name": "BaseBdev3", 00:12:00.679 "uuid": "4a4ab97d-48fa-5395-825a-160954bbe7f8", 00:12:00.679 "is_configured": true, 00:12:00.679 "data_offset": 2048, 00:12:00.679 "data_size": 63488 00:12:00.679 }, 00:12:00.679 { 00:12:00.679 "name": "BaseBdev4", 00:12:00.679 "uuid": 
"07e7ad0c-f2a0-52d8-be9c-e4a973af4305", 00:12:00.679 "is_configured": true, 00:12:00.679 "data_offset": 2048, 00:12:00.679 "data_size": 63488 00:12:00.679 } 00:12:00.679 ] 00:12:00.679 }' 00:12:00.679 09:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:00.679 09:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:00.679 09:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:00.679 09:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:00.679 09:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:01.541 86.00 IOPS, 258.00 MiB/s [2024-10-30T09:46:40.161Z] [2024-10-30 09:46:39.879716] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:01.541 [2024-10-30 09:46:39.984591] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:01.541 [2024-10-30 09:46:39.986534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.799 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:01.799 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:01.799 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.799 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:01.799 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:01.799 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.799 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:01.799 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.799 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.799 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.799 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.799 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:01.799 "name": "raid_bdev1", 00:12:01.799 "uuid": "f1de5ed6-73ff-489b-98c6-33914aab6397", 00:12:01.799 "strip_size_kb": 0, 00:12:01.799 "state": "online", 00:12:01.799 "raid_level": "raid1", 00:12:01.799 "superblock": true, 00:12:01.799 "num_base_bdevs": 4, 00:12:01.799 "num_base_bdevs_discovered": 3, 00:12:01.799 "num_base_bdevs_operational": 3, 00:12:01.799 "base_bdevs_list": [ 00:12:01.799 { 00:12:01.799 "name": "spare", 00:12:01.799 "uuid": "5394723e-8c60-5a0d-98f8-d94199e8f268", 00:12:01.799 "is_configured": true, 00:12:01.799 "data_offset": 2048, 00:12:01.799 "data_size": 63488 00:12:01.799 }, 00:12:01.799 { 00:12:01.799 "name": null, 00:12:01.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.799 "is_configured": false, 00:12:01.799 "data_offset": 0, 00:12:01.799 "data_size": 63488 00:12:01.799 }, 00:12:01.799 { 00:12:01.799 "name": "BaseBdev3", 00:12:01.799 "uuid": "4a4ab97d-48fa-5395-825a-160954bbe7f8", 00:12:01.799 "is_configured": true, 00:12:01.799 "data_offset": 2048, 00:12:01.799 "data_size": 63488 00:12:01.799 }, 00:12:01.799 { 00:12:01.800 "name": "BaseBdev4", 00:12:01.800 "uuid": "07e7ad0c-f2a0-52d8-be9c-e4a973af4305", 00:12:01.800 "is_configured": true, 00:12:01.800 "data_offset": 2048, 00:12:01.800 "data_size": 63488 00:12:01.800 } 00:12:01.800 ] 00:12:01.800 }' 00:12:01.800 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:01.800 
09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:01.800 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:01.800 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:01.800 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:01.800 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:01.800 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.800 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:01.800 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:01.800 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.800 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.800 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.800 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.800 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.800 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.800 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:01.800 "name": "raid_bdev1", 00:12:01.800 "uuid": "f1de5ed6-73ff-489b-98c6-33914aab6397", 00:12:01.800 "strip_size_kb": 0, 00:12:01.800 "state": "online", 00:12:01.800 "raid_level": "raid1", 00:12:01.800 "superblock": true, 00:12:01.800 "num_base_bdevs": 4, 00:12:01.800 "num_base_bdevs_discovered": 3, 00:12:01.800 
"num_base_bdevs_operational": 3, 00:12:01.800 "base_bdevs_list": [ 00:12:01.800 { 00:12:01.800 "name": "spare", 00:12:01.800 "uuid": "5394723e-8c60-5a0d-98f8-d94199e8f268", 00:12:01.800 "is_configured": true, 00:12:01.800 "data_offset": 2048, 00:12:01.800 "data_size": 63488 00:12:01.800 }, 00:12:01.800 { 00:12:01.800 "name": null, 00:12:01.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.800 "is_configured": false, 00:12:01.800 "data_offset": 0, 00:12:01.800 "data_size": 63488 00:12:01.800 }, 00:12:01.800 { 00:12:01.800 "name": "BaseBdev3", 00:12:01.800 "uuid": "4a4ab97d-48fa-5395-825a-160954bbe7f8", 00:12:01.800 "is_configured": true, 00:12:01.800 "data_offset": 2048, 00:12:01.800 "data_size": 63488 00:12:01.800 }, 00:12:01.800 { 00:12:01.800 "name": "BaseBdev4", 00:12:01.800 "uuid": "07e7ad0c-f2a0-52d8-be9c-e4a973af4305", 00:12:01.800 "is_configured": true, 00:12:01.800 "data_offset": 2048, 00:12:01.800 "data_size": 63488 00:12:01.800 } 00:12:01.800 ] 00:12:01.800 }' 00:12:01.800 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.058 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:02.058 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:02.058 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:02.058 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:02.058 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.058 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.058 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.058 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.058 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:02.058 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.058 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.058 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.058 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.058 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.058 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.058 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.058 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:02.058 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.058 78.00 IOPS, 234.00 MiB/s [2024-10-30T09:46:40.678Z] 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.058 "name": "raid_bdev1", 00:12:02.058 "uuid": "f1de5ed6-73ff-489b-98c6-33914aab6397", 00:12:02.058 "strip_size_kb": 0, 00:12:02.058 "state": "online", 00:12:02.058 "raid_level": "raid1", 00:12:02.058 "superblock": true, 00:12:02.058 "num_base_bdevs": 4, 00:12:02.058 "num_base_bdevs_discovered": 3, 00:12:02.058 "num_base_bdevs_operational": 3, 00:12:02.058 "base_bdevs_list": [ 00:12:02.058 { 00:12:02.058 "name": "spare", 00:12:02.058 "uuid": "5394723e-8c60-5a0d-98f8-d94199e8f268", 00:12:02.058 "is_configured": true, 00:12:02.058 "data_offset": 2048, 00:12:02.058 "data_size": 63488 00:12:02.058 }, 00:12:02.058 { 00:12:02.058 "name": null, 00:12:02.058 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:02.058 "is_configured": false, 00:12:02.058 "data_offset": 0, 00:12:02.058 "data_size": 63488 00:12:02.058 }, 00:12:02.058 { 00:12:02.058 "name": "BaseBdev3", 00:12:02.058 "uuid": "4a4ab97d-48fa-5395-825a-160954bbe7f8", 00:12:02.058 "is_configured": true, 00:12:02.058 "data_offset": 2048, 00:12:02.058 "data_size": 63488 00:12:02.058 }, 00:12:02.058 { 00:12:02.058 "name": "BaseBdev4", 00:12:02.058 "uuid": "07e7ad0c-f2a0-52d8-be9c-e4a973af4305", 00:12:02.058 "is_configured": true, 00:12:02.058 "data_offset": 2048, 00:12:02.058 "data_size": 63488 00:12:02.058 } 00:12:02.058 ] 00:12:02.058 }' 00:12:02.058 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.058 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:02.316 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:02.316 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.316 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:02.316 [2024-10-30 09:46:40.778495] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:02.316 [2024-10-30 09:46:40.778521] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:02.316 00:12:02.316 Latency(us) 00:12:02.316 [2024-10-30T09:46:40.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.316 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:02.316 raid_bdev1 : 7.36 75.66 226.97 0.00 0.00 18253.13 259.94 116149.96 00:12:02.316 [2024-10-30T09:46:40.936Z] =================================================================================================================== 00:12:02.316 [2024-10-30T09:46:40.936Z] Total : 75.66 226.97 0.00 0.00 18253.13 
259.94 116149.96 00:12:02.316 [2024-10-30 09:46:40.858401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.316 [2024-10-30 09:46:40.858439] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:02.316 [2024-10-30 09:46:40.858528] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:02.316 [2024-10-30 09:46:40.858537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:02.316 { 00:12:02.316 "results": [ 00:12:02.316 { 00:12:02.316 "job": "raid_bdev1", 00:12:02.316 "core_mask": "0x1", 00:12:02.316 "workload": "randrw", 00:12:02.316 "percentage": 50, 00:12:02.316 "status": "finished", 00:12:02.316 "queue_depth": 2, 00:12:02.316 "io_size": 3145728, 00:12:02.316 "runtime": 7.3622, 00:12:02.316 "iops": 75.65673304175382, 00:12:02.316 "mibps": 226.97019912526147, 00:12:02.316 "io_failed": 0, 00:12:02.316 "io_timeout": 0, 00:12:02.316 "avg_latency_us": 18253.127380196107, 00:12:02.316 "min_latency_us": 259.9384615384615, 00:12:02.316 "max_latency_us": 116149.95692307693 00:12:02.316 } 00:12:02.316 ], 00:12:02.316 "core_count": 1 00:12:02.316 } 00:12:02.316 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.316 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:02.316 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.316 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.317 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:02.317 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.317 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:02.317 09:46:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:02.317 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:02.317 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:02.317 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:02.317 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:02.317 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:02.317 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:02.317 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:02.317 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:02.317 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:02.317 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:02.317 09:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:02.575 /dev/nbd0 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 
00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:02.575 1+0 records in 00:12:02.575 1+0 records out 00:12:02.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314042 s, 13.0 MB/s 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in 
"${base_bdevs[@]:1}" 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:02.575 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:12:02.833 /dev/nbd1 00:12:02.833 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:02.833 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:02.833 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:02.833 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:12:02.833 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:02.833 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:02.833 09:46:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:02.833 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:12:02.833 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:02.833 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:02.833 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:02.833 1+0 records in 00:12:02.833 1+0 records out 00:12:02.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298661 s, 13.7 MB/s 00:12:02.834 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:02.834 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:12:02.834 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:02.834 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:02.834 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:12:02.834 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:02.834 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:02.834 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:12:03.093 09:46:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:03.093 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:12:03.352 /dev/nbd1 00:12:03.352 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:03.352 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:03.352 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:03.352 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:12:03.352 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:03.352 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:03.352 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:03.352 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:12:03.352 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:03.352 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:03.352 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:03.352 1+0 records in 00:12:03.352 1+0 records out 00:12:03.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343925 s, 11.9 MB/s 00:12:03.352 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.352 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:12:03.352 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.352 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:03.352 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:12:03.352 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:03.352 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:03.352 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:03.611 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:03.611 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:03.611 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:03.611 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:03.611 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:03.611 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.611 09:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 
00:12:03.611 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:03.611 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:03.611 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:03.611 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.611 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.611 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:03.611 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:03.611 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.611 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:03.611 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:03.611 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:03.611 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:03.611 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:03.611 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.611 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:03.868 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:03.868 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:03.869 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:03.869 
09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.869 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.869 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:03.869 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:03.869 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.869 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:03.869 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:03.869 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.869 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.869 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.869 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:03.869 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.869 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.869 [2024-10-30 09:46:42.413936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:03.869 [2024-10-30 09:46:42.413978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.869 [2024-10-30 09:46:42.413995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:03.869 [2024-10-30 09:46:42.414002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.869 [2024-10-30 09:46:42.415830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.869 
[2024-10-30 09:46:42.415862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:03.869 [2024-10-30 09:46:42.415935] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:03.869 [2024-10-30 09:46:42.415978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:03.869 [2024-10-30 09:46:42.416094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:03.869 [2024-10-30 09:46:42.416170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:03.869 spare 00:12:03.869 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.869 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:03.869 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.869 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.127 [2024-10-30 09:46:42.516250] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:04.127 [2024-10-30 09:46:42.516277] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:04.127 [2024-10-30 09:46:42.516544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:12:04.127 [2024-10-30 09:46:42.516699] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:04.127 [2024-10-30 09:46:42.516716] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:04.127 [2024-10-30 09:46:42.516858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.127 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.127 09:46:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:04.127 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.127 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.127 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.127 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.127 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.127 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.127 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.127 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.127 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.127 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.127 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.127 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.127 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.127 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.127 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.127 "name": "raid_bdev1", 00:12:04.127 "uuid": "f1de5ed6-73ff-489b-98c6-33914aab6397", 00:12:04.127 "strip_size_kb": 0, 00:12:04.127 "state": "online", 00:12:04.127 "raid_level": "raid1", 00:12:04.127 
"superblock": true, 00:12:04.127 "num_base_bdevs": 4, 00:12:04.127 "num_base_bdevs_discovered": 3, 00:12:04.127 "num_base_bdevs_operational": 3, 00:12:04.127 "base_bdevs_list": [ 00:12:04.127 { 00:12:04.127 "name": "spare", 00:12:04.127 "uuid": "5394723e-8c60-5a0d-98f8-d94199e8f268", 00:12:04.127 "is_configured": true, 00:12:04.127 "data_offset": 2048, 00:12:04.127 "data_size": 63488 00:12:04.127 }, 00:12:04.127 { 00:12:04.127 "name": null, 00:12:04.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.127 "is_configured": false, 00:12:04.127 "data_offset": 2048, 00:12:04.127 "data_size": 63488 00:12:04.127 }, 00:12:04.127 { 00:12:04.127 "name": "BaseBdev3", 00:12:04.127 "uuid": "4a4ab97d-48fa-5395-825a-160954bbe7f8", 00:12:04.127 "is_configured": true, 00:12:04.127 "data_offset": 2048, 00:12:04.127 "data_size": 63488 00:12:04.127 }, 00:12:04.127 { 00:12:04.127 "name": "BaseBdev4", 00:12:04.127 "uuid": "07e7ad0c-f2a0-52d8-be9c-e4a973af4305", 00:12:04.127 "is_configured": true, 00:12:04.127 "data_offset": 2048, 00:12:04.127 "data_size": 63488 00:12:04.127 } 00:12:04.127 ] 00:12:04.127 }' 00:12:04.127 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.127 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.386 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:04.386 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:04.386 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:04.386 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:04.386 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:04.386 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:12:04.386 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.386 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.386 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.386 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.386 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:04.386 "name": "raid_bdev1", 00:12:04.386 "uuid": "f1de5ed6-73ff-489b-98c6-33914aab6397", 00:12:04.386 "strip_size_kb": 0, 00:12:04.386 "state": "online", 00:12:04.386 "raid_level": "raid1", 00:12:04.386 "superblock": true, 00:12:04.386 "num_base_bdevs": 4, 00:12:04.386 "num_base_bdevs_discovered": 3, 00:12:04.386 "num_base_bdevs_operational": 3, 00:12:04.386 "base_bdevs_list": [ 00:12:04.386 { 00:12:04.386 "name": "spare", 00:12:04.386 "uuid": "5394723e-8c60-5a0d-98f8-d94199e8f268", 00:12:04.386 "is_configured": true, 00:12:04.386 "data_offset": 2048, 00:12:04.386 "data_size": 63488 00:12:04.386 }, 00:12:04.386 { 00:12:04.386 "name": null, 00:12:04.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.386 "is_configured": false, 00:12:04.386 "data_offset": 2048, 00:12:04.386 "data_size": 63488 00:12:04.386 }, 00:12:04.386 { 00:12:04.386 "name": "BaseBdev3", 00:12:04.386 "uuid": "4a4ab97d-48fa-5395-825a-160954bbe7f8", 00:12:04.386 "is_configured": true, 00:12:04.386 "data_offset": 2048, 00:12:04.386 "data_size": 63488 00:12:04.386 }, 00:12:04.386 { 00:12:04.386 "name": "BaseBdev4", 00:12:04.386 "uuid": "07e7ad0c-f2a0-52d8-be9c-e4a973af4305", 00:12:04.386 "is_configured": true, 00:12:04.386 "data_offset": 2048, 00:12:04.386 "data_size": 63488 00:12:04.386 } 00:12:04.386 ] 00:12:04.386 }' 00:12:04.386 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:12:04.386 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:04.386 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:04.386 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:04.386 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.386 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:04.387 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.387 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.387 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.387 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:04.387 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:04.387 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.387 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.387 [2024-10-30 09:46:42.970138] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:04.387 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.387 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:04.387 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.387 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.387 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.387 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.387 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:04.387 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.387 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.387 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.387 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.387 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.387 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.387 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.387 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.387 09:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.387 09:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.387 "name": "raid_bdev1", 00:12:04.387 "uuid": "f1de5ed6-73ff-489b-98c6-33914aab6397", 00:12:04.387 "strip_size_kb": 0, 00:12:04.387 "state": "online", 00:12:04.387 "raid_level": "raid1", 00:12:04.387 "superblock": true, 00:12:04.387 "num_base_bdevs": 4, 00:12:04.387 "num_base_bdevs_discovered": 2, 00:12:04.387 "num_base_bdevs_operational": 2, 00:12:04.387 "base_bdevs_list": [ 00:12:04.387 { 00:12:04.387 "name": null, 00:12:04.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.387 "is_configured": false, 00:12:04.387 "data_offset": 0, 00:12:04.387 "data_size": 63488 00:12:04.387 }, 00:12:04.387 { 
00:12:04.387 "name": null, 00:12:04.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.387 "is_configured": false, 00:12:04.387 "data_offset": 2048, 00:12:04.387 "data_size": 63488 00:12:04.387 }, 00:12:04.387 { 00:12:04.387 "name": "BaseBdev3", 00:12:04.387 "uuid": "4a4ab97d-48fa-5395-825a-160954bbe7f8", 00:12:04.387 "is_configured": true, 00:12:04.387 "data_offset": 2048, 00:12:04.387 "data_size": 63488 00:12:04.387 }, 00:12:04.387 { 00:12:04.387 "name": "BaseBdev4", 00:12:04.387 "uuid": "07e7ad0c-f2a0-52d8-be9c-e4a973af4305", 00:12:04.387 "is_configured": true, 00:12:04.387 "data_offset": 2048, 00:12:04.387 "data_size": 63488 00:12:04.387 } 00:12:04.387 ] 00:12:04.387 }' 00:12:04.387 09:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.387 09:46:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.953 09:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:04.953 09:46:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.953 09:46:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.953 [2024-10-30 09:46:43.270232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:04.953 [2024-10-30 09:46:43.270377] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:04.953 [2024-10-30 09:46:43.270387] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:04.953 [2024-10-30 09:46:43.270415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:04.953 [2024-10-30 09:46:43.277960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:12:04.953 09:46:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.953 09:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:04.953 [2024-10-30 09:46:43.279492] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:05.887 "name": "raid_bdev1", 00:12:05.887 "uuid": "f1de5ed6-73ff-489b-98c6-33914aab6397", 00:12:05.887 "strip_size_kb": 0, 00:12:05.887 "state": "online", 
00:12:05.887 "raid_level": "raid1", 00:12:05.887 "superblock": true, 00:12:05.887 "num_base_bdevs": 4, 00:12:05.887 "num_base_bdevs_discovered": 3, 00:12:05.887 "num_base_bdevs_operational": 3, 00:12:05.887 "process": { 00:12:05.887 "type": "rebuild", 00:12:05.887 "target": "spare", 00:12:05.887 "progress": { 00:12:05.887 "blocks": 20480, 00:12:05.887 "percent": 32 00:12:05.887 } 00:12:05.887 }, 00:12:05.887 "base_bdevs_list": [ 00:12:05.887 { 00:12:05.887 "name": "spare", 00:12:05.887 "uuid": "5394723e-8c60-5a0d-98f8-d94199e8f268", 00:12:05.887 "is_configured": true, 00:12:05.887 "data_offset": 2048, 00:12:05.887 "data_size": 63488 00:12:05.887 }, 00:12:05.887 { 00:12:05.887 "name": null, 00:12:05.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.887 "is_configured": false, 00:12:05.887 "data_offset": 2048, 00:12:05.887 "data_size": 63488 00:12:05.887 }, 00:12:05.887 { 00:12:05.887 "name": "BaseBdev3", 00:12:05.887 "uuid": "4a4ab97d-48fa-5395-825a-160954bbe7f8", 00:12:05.887 "is_configured": true, 00:12:05.887 "data_offset": 2048, 00:12:05.887 "data_size": 63488 00:12:05.887 }, 00:12:05.887 { 00:12:05.887 "name": "BaseBdev4", 00:12:05.887 "uuid": "07e7ad0c-f2a0-52d8-be9c-e4a973af4305", 00:12:05.887 "is_configured": true, 00:12:05.887 "data_offset": 2048, 00:12:05.887 "data_size": 63488 00:12:05.887 } 00:12:05.887 ] 00:12:05.887 }' 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:05.887 09:46:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.887 [2024-10-30 09:46:44.393829] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:05.887 [2024-10-30 09:46:44.485036] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:05.887 [2024-10-30 09:46:44.485112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.887 [2024-10-30 09:46:44.485128] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:05.887 [2024-10-30 09:46:44.485134] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.887 09:46:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.887 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.145 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.145 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.145 "name": "raid_bdev1", 00:12:06.145 "uuid": "f1de5ed6-73ff-489b-98c6-33914aab6397", 00:12:06.145 "strip_size_kb": 0, 00:12:06.145 "state": "online", 00:12:06.145 "raid_level": "raid1", 00:12:06.145 "superblock": true, 00:12:06.145 "num_base_bdevs": 4, 00:12:06.145 "num_base_bdevs_discovered": 2, 00:12:06.145 "num_base_bdevs_operational": 2, 00:12:06.145 "base_bdevs_list": [ 00:12:06.145 { 00:12:06.145 "name": null, 00:12:06.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.145 "is_configured": false, 00:12:06.145 "data_offset": 0, 00:12:06.145 "data_size": 63488 00:12:06.145 }, 00:12:06.145 { 00:12:06.145 "name": null, 00:12:06.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.145 "is_configured": false, 00:12:06.145 "data_offset": 2048, 00:12:06.145 "data_size": 63488 00:12:06.145 }, 00:12:06.145 { 00:12:06.145 "name": "BaseBdev3", 00:12:06.145 "uuid": "4a4ab97d-48fa-5395-825a-160954bbe7f8", 00:12:06.145 "is_configured": true, 00:12:06.145 "data_offset": 2048, 00:12:06.145 "data_size": 63488 00:12:06.145 }, 00:12:06.145 { 00:12:06.145 "name": "BaseBdev4", 00:12:06.145 "uuid": "07e7ad0c-f2a0-52d8-be9c-e4a973af4305", 00:12:06.145 "is_configured": true, 00:12:06.145 "data_offset": 2048, 00:12:06.145 
"data_size": 63488 00:12:06.146 } 00:12:06.146 ] 00:12:06.146 }' 00:12:06.146 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.146 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.404 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:06.404 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.404 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.404 [2024-10-30 09:46:44.814180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:06.404 [2024-10-30 09:46:44.814230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.404 [2024-10-30 09:46:44.814250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:06.404 [2024-10-30 09:46:44.814258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.404 [2024-10-30 09:46:44.814640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.404 [2024-10-30 09:46:44.814657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:06.404 [2024-10-30 09:46:44.814738] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:06.404 [2024-10-30 09:46:44.814747] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:06.404 [2024-10-30 09:46:44.814757] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:06.404 [2024-10-30 09:46:44.814772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:06.404 [2024-10-30 09:46:44.823011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:12:06.404 spare 00:12:06.404 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.404 09:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:06.404 [2024-10-30 09:46:44.824580] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.339 "name": "raid_bdev1", 00:12:07.339 "uuid": "f1de5ed6-73ff-489b-98c6-33914aab6397", 00:12:07.339 "strip_size_kb": 0, 00:12:07.339 
"state": "online", 00:12:07.339 "raid_level": "raid1", 00:12:07.339 "superblock": true, 00:12:07.339 "num_base_bdevs": 4, 00:12:07.339 "num_base_bdevs_discovered": 3, 00:12:07.339 "num_base_bdevs_operational": 3, 00:12:07.339 "process": { 00:12:07.339 "type": "rebuild", 00:12:07.339 "target": "spare", 00:12:07.339 "progress": { 00:12:07.339 "blocks": 20480, 00:12:07.339 "percent": 32 00:12:07.339 } 00:12:07.339 }, 00:12:07.339 "base_bdevs_list": [ 00:12:07.339 { 00:12:07.339 "name": "spare", 00:12:07.339 "uuid": "5394723e-8c60-5a0d-98f8-d94199e8f268", 00:12:07.339 "is_configured": true, 00:12:07.339 "data_offset": 2048, 00:12:07.339 "data_size": 63488 00:12:07.339 }, 00:12:07.339 { 00:12:07.339 "name": null, 00:12:07.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.339 "is_configured": false, 00:12:07.339 "data_offset": 2048, 00:12:07.339 "data_size": 63488 00:12:07.339 }, 00:12:07.339 { 00:12:07.339 "name": "BaseBdev3", 00:12:07.339 "uuid": "4a4ab97d-48fa-5395-825a-160954bbe7f8", 00:12:07.339 "is_configured": true, 00:12:07.339 "data_offset": 2048, 00:12:07.339 "data_size": 63488 00:12:07.339 }, 00:12:07.339 { 00:12:07.339 "name": "BaseBdev4", 00:12:07.339 "uuid": "07e7ad0c-f2a0-52d8-be9c-e4a973af4305", 00:12:07.339 "is_configured": true, 00:12:07.339 "data_offset": 2048, 00:12:07.339 "data_size": 63488 00:12:07.339 } 00:12:07.339 ] 00:12:07.339 }' 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:07.339 09:46:45 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.339 [2024-10-30 09:46:45.918973] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:07.339 [2024-10-30 09:46:45.929616] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:07.339 [2024-10-30 09:46:45.929662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.339 [2024-10-30 09:46:45.929675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:07.339 [2024-10-30 09:46:45.929682] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.339 09:46:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.339 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.597 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.597 "name": "raid_bdev1", 00:12:07.597 "uuid": "f1de5ed6-73ff-489b-98c6-33914aab6397", 00:12:07.598 "strip_size_kb": 0, 00:12:07.598 "state": "online", 00:12:07.598 "raid_level": "raid1", 00:12:07.598 "superblock": true, 00:12:07.598 "num_base_bdevs": 4, 00:12:07.598 "num_base_bdevs_discovered": 2, 00:12:07.598 "num_base_bdevs_operational": 2, 00:12:07.598 "base_bdevs_list": [ 00:12:07.598 { 00:12:07.598 "name": null, 00:12:07.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.598 "is_configured": false, 00:12:07.598 "data_offset": 0, 00:12:07.598 "data_size": 63488 00:12:07.598 }, 00:12:07.598 { 00:12:07.598 "name": null, 00:12:07.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.598 "is_configured": false, 00:12:07.598 "data_offset": 2048, 00:12:07.598 "data_size": 63488 00:12:07.598 }, 00:12:07.598 { 00:12:07.598 "name": "BaseBdev3", 00:12:07.598 "uuid": "4a4ab97d-48fa-5395-825a-160954bbe7f8", 00:12:07.598 "is_configured": true, 00:12:07.598 "data_offset": 2048, 00:12:07.598 "data_size": 63488 00:12:07.598 }, 00:12:07.598 { 00:12:07.598 "name": "BaseBdev4", 00:12:07.598 "uuid": "07e7ad0c-f2a0-52d8-be9c-e4a973af4305", 00:12:07.598 "is_configured": true, 00:12:07.598 "data_offset": 2048, 00:12:07.598 
"data_size": 63488 00:12:07.598 } 00:12:07.598 ] 00:12:07.598 }' 00:12:07.598 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.598 09:46:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.856 09:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:07.856 09:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:07.856 09:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:07.856 09:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:07.856 09:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:07.856 09:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.856 09:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.856 09:46:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.856 09:46:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.856 09:46:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.856 09:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.856 "name": "raid_bdev1", 00:12:07.856 "uuid": "f1de5ed6-73ff-489b-98c6-33914aab6397", 00:12:07.856 "strip_size_kb": 0, 00:12:07.856 "state": "online", 00:12:07.856 "raid_level": "raid1", 00:12:07.856 "superblock": true, 00:12:07.856 "num_base_bdevs": 4, 00:12:07.856 "num_base_bdevs_discovered": 2, 00:12:07.856 "num_base_bdevs_operational": 2, 00:12:07.856 "base_bdevs_list": [ 00:12:07.856 { 00:12:07.856 "name": null, 00:12:07.856 "uuid": "00000000-0000-0000-0000-000000000000", 
00:12:07.856 "is_configured": false, 00:12:07.856 "data_offset": 0, 00:12:07.856 "data_size": 63488 00:12:07.856 }, 00:12:07.856 { 00:12:07.856 "name": null, 00:12:07.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.856 "is_configured": false, 00:12:07.856 "data_offset": 2048, 00:12:07.856 "data_size": 63488 00:12:07.856 }, 00:12:07.856 { 00:12:07.856 "name": "BaseBdev3", 00:12:07.856 "uuid": "4a4ab97d-48fa-5395-825a-160954bbe7f8", 00:12:07.856 "is_configured": true, 00:12:07.856 "data_offset": 2048, 00:12:07.856 "data_size": 63488 00:12:07.856 }, 00:12:07.856 { 00:12:07.856 "name": "BaseBdev4", 00:12:07.856 "uuid": "07e7ad0c-f2a0-52d8-be9c-e4a973af4305", 00:12:07.856 "is_configured": true, 00:12:07.856 "data_offset": 2048, 00:12:07.856 "data_size": 63488 00:12:07.856 } 00:12:07.856 ] 00:12:07.856 }' 00:12:07.856 09:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:07.856 09:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:07.856 09:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.856 09:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:07.856 09:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:07.856 09:46:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.856 09:46:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.856 09:46:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.856 09:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:07.856 09:46:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.856 09:46:46 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.856 [2024-10-30 09:46:46.366779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:07.856 [2024-10-30 09:46:46.366827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.856 [2024-10-30 09:46:46.366843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:12:07.856 [2024-10-30 09:46:46.366856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.856 [2024-10-30 09:46:46.367200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.856 [2024-10-30 09:46:46.367213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:07.856 [2024-10-30 09:46:46.367272] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:07.856 [2024-10-30 09:46:46.367286] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:07.856 [2024-10-30 09:46:46.367292] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:07.856 [2024-10-30 09:46:46.367302] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:07.856 BaseBdev1 00:12:07.856 09:46:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.856 09:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:08.790 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:08.790 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.790 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:12:08.790 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.790 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.790 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:08.790 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.790 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.790 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.790 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.790 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.790 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.790 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.790 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.790 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.048 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.048 "name": "raid_bdev1", 00:12:09.048 "uuid": "f1de5ed6-73ff-489b-98c6-33914aab6397", 00:12:09.048 "strip_size_kb": 0, 00:12:09.048 "state": "online", 00:12:09.048 "raid_level": "raid1", 00:12:09.048 "superblock": true, 00:12:09.048 "num_base_bdevs": 4, 00:12:09.048 "num_base_bdevs_discovered": 2, 00:12:09.048 "num_base_bdevs_operational": 2, 00:12:09.048 "base_bdevs_list": [ 00:12:09.048 { 00:12:09.048 "name": null, 00:12:09.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.048 "is_configured": false, 00:12:09.048 
"data_offset": 0, 00:12:09.048 "data_size": 63488 00:12:09.048 }, 00:12:09.048 { 00:12:09.048 "name": null, 00:12:09.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.048 "is_configured": false, 00:12:09.048 "data_offset": 2048, 00:12:09.048 "data_size": 63488 00:12:09.048 }, 00:12:09.048 { 00:12:09.048 "name": "BaseBdev3", 00:12:09.048 "uuid": "4a4ab97d-48fa-5395-825a-160954bbe7f8", 00:12:09.048 "is_configured": true, 00:12:09.048 "data_offset": 2048, 00:12:09.048 "data_size": 63488 00:12:09.048 }, 00:12:09.048 { 00:12:09.048 "name": "BaseBdev4", 00:12:09.048 "uuid": "07e7ad0c-f2a0-52d8-be9c-e4a973af4305", 00:12:09.048 "is_configured": true, 00:12:09.048 "data_offset": 2048, 00:12:09.048 "data_size": 63488 00:12:09.048 } 00:12:09.048 ] 00:12:09.048 }' 00:12:09.048 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.048 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:09.306 "name": "raid_bdev1", 00:12:09.306 "uuid": "f1de5ed6-73ff-489b-98c6-33914aab6397", 00:12:09.306 "strip_size_kb": 0, 00:12:09.306 "state": "online", 00:12:09.306 "raid_level": "raid1", 00:12:09.306 "superblock": true, 00:12:09.306 "num_base_bdevs": 4, 00:12:09.306 "num_base_bdevs_discovered": 2, 00:12:09.306 "num_base_bdevs_operational": 2, 00:12:09.306 "base_bdevs_list": [ 00:12:09.306 { 00:12:09.306 "name": null, 00:12:09.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.306 "is_configured": false, 00:12:09.306 "data_offset": 0, 00:12:09.306 "data_size": 63488 00:12:09.306 }, 00:12:09.306 { 00:12:09.306 "name": null, 00:12:09.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.306 "is_configured": false, 00:12:09.306 "data_offset": 2048, 00:12:09.306 "data_size": 63488 00:12:09.306 }, 00:12:09.306 { 00:12:09.306 "name": "BaseBdev3", 00:12:09.306 "uuid": "4a4ab97d-48fa-5395-825a-160954bbe7f8", 00:12:09.306 "is_configured": true, 00:12:09.306 "data_offset": 2048, 00:12:09.306 "data_size": 63488 00:12:09.306 }, 00:12:09.306 { 00:12:09.306 "name": "BaseBdev4", 00:12:09.306 "uuid": "07e7ad0c-f2a0-52d8-be9c-e4a973af4305", 00:12:09.306 "is_configured": true, 00:12:09.306 "data_offset": 2048, 00:12:09.306 "data_size": 63488 00:12:09.306 } 00:12:09.306 ] 00:12:09.306 }' 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:09.306 
09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.306 [2024-10-30 09:46:47.827248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:09.306 [2024-10-30 09:46:47.827394] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:09.306 [2024-10-30 09:46:47.827411] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:09.306 request: 00:12:09.306 { 00:12:09.306 "base_bdev": "BaseBdev1", 00:12:09.306 "raid_bdev": "raid_bdev1", 00:12:09.306 "method": "bdev_raid_add_base_bdev", 00:12:09.306 "req_id": 1 00:12:09.306 } 00:12:09.306 Got JSON-RPC error response 00:12:09.306 response: 00:12:09.306 { 00:12:09.306 "code": -22, 00:12:09.306 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:09.306 } 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:09.306 09:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:10.292 09:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:10.292 09:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.292 09:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.292 09:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.292 09:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.292 09:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:10.292 09:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.292 09:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.292 09:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.292 09:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.292 09:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.292 09:46:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.292 09:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.292 09:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.292 09:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.292 09:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.292 "name": "raid_bdev1", 00:12:10.292 "uuid": "f1de5ed6-73ff-489b-98c6-33914aab6397", 00:12:10.292 "strip_size_kb": 0, 00:12:10.292 "state": "online", 00:12:10.292 "raid_level": "raid1", 00:12:10.292 "superblock": true, 00:12:10.292 "num_base_bdevs": 4, 00:12:10.292 "num_base_bdevs_discovered": 2, 00:12:10.292 "num_base_bdevs_operational": 2, 00:12:10.292 "base_bdevs_list": [ 00:12:10.292 { 00:12:10.293 "name": null, 00:12:10.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.293 "is_configured": false, 00:12:10.293 "data_offset": 0, 00:12:10.293 "data_size": 63488 00:12:10.293 }, 00:12:10.293 { 00:12:10.293 "name": null, 00:12:10.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.293 "is_configured": false, 00:12:10.293 "data_offset": 2048, 00:12:10.293 "data_size": 63488 00:12:10.293 }, 00:12:10.293 { 00:12:10.293 "name": "BaseBdev3", 00:12:10.293 "uuid": "4a4ab97d-48fa-5395-825a-160954bbe7f8", 00:12:10.293 "is_configured": true, 00:12:10.293 "data_offset": 2048, 00:12:10.293 "data_size": 63488 00:12:10.293 }, 00:12:10.293 { 00:12:10.293 "name": "BaseBdev4", 00:12:10.293 "uuid": "07e7ad0c-f2a0-52d8-be9c-e4a973af4305", 00:12:10.293 "is_configured": true, 00:12:10.293 "data_offset": 2048, 00:12:10.293 "data_size": 63488 00:12:10.293 } 00:12:10.293 ] 00:12:10.293 }' 00:12:10.293 09:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.293 09:46:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.861 "name": "raid_bdev1", 00:12:10.861 "uuid": "f1de5ed6-73ff-489b-98c6-33914aab6397", 00:12:10.861 "strip_size_kb": 0, 00:12:10.861 "state": "online", 00:12:10.861 "raid_level": "raid1", 00:12:10.861 "superblock": true, 00:12:10.861 "num_base_bdevs": 4, 00:12:10.861 "num_base_bdevs_discovered": 2, 00:12:10.861 "num_base_bdevs_operational": 2, 00:12:10.861 "base_bdevs_list": [ 00:12:10.861 { 00:12:10.861 "name": null, 00:12:10.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.861 "is_configured": false, 00:12:10.861 "data_offset": 0, 00:12:10.861 "data_size": 63488 00:12:10.861 }, 00:12:10.861 { 00:12:10.861 "name": null, 00:12:10.861 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:10.861 "is_configured": false, 00:12:10.861 "data_offset": 2048, 00:12:10.861 "data_size": 63488 00:12:10.861 }, 00:12:10.861 { 00:12:10.861 "name": "BaseBdev3", 00:12:10.861 "uuid": "4a4ab97d-48fa-5395-825a-160954bbe7f8", 00:12:10.861 "is_configured": true, 00:12:10.861 "data_offset": 2048, 00:12:10.861 "data_size": 63488 00:12:10.861 }, 00:12:10.861 { 00:12:10.861 "name": "BaseBdev4", 00:12:10.861 "uuid": "07e7ad0c-f2a0-52d8-be9c-e4a973af4305", 00:12:10.861 "is_configured": true, 00:12:10.861 "data_offset": 2048, 00:12:10.861 "data_size": 63488 00:12:10.861 } 00:12:10.861 ] 00:12:10.861 }' 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77049 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 77049 ']' 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 77049 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77049 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:10.861 killing process with pid 77049 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77049' 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 77049 00:12:10.861 Received shutdown signal, test time was about 15.809413 seconds 00:12:10.861 00:12:10.861 Latency(us) 00:12:10.861 [2024-10-30T09:46:49.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:10.861 [2024-10-30T09:46:49.481Z] =================================================================================================================== 00:12:10.861 [2024-10-30T09:46:49.481Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:10.861 09:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 77049 00:12:10.861 [2024-10-30 09:46:49.293710] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:10.861 [2024-10-30 09:46:49.293812] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.861 [2024-10-30 09:46:49.293875] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.861 [2024-10-30 09:46:49.293886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:11.119 [2024-10-30 09:46:49.500577] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:11.684 ************************************ 00:12:11.684 END TEST raid_rebuild_test_sb_io 00:12:11.684 ************************************ 00:12:11.684 09:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:11.684 00:12:11.684 real 0m18.235s 00:12:11.684 user 0m23.202s 00:12:11.684 sys 0m1.752s 00:12:11.684 09:46:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:11.684 09:46:50 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@10 -- # set +x 00:12:11.684 09:46:50 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:12:11.684 09:46:50 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:12:11.684 09:46:50 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:11.684 09:46:50 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:11.684 09:46:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:11.684 ************************************ 00:12:11.684 START TEST raid5f_state_function_test 00:12:11.684 ************************************ 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 false 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.684 09:46:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:11.684 Process raid pid: 77744 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=77744 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77744' 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 77744 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 77744 ']' 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.684 09:46:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:11.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.685 09:46:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.685 09:46:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:11.685 09:46:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.685 [2024-10-30 09:46:50.237158] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:12:11.685 [2024-10-30 09:46:50.237277] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.945 [2024-10-30 09:46:50.398830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.945 [2024-10-30 09:46:50.503836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.205 [2024-10-30 09:46:50.645524] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.205 [2024-10-30 09:46:50.645568] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.466 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:12.466 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:12:12.466 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:12.466 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.466 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.728 [2024-10-30 09:46:51.090988] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:12.728 [2024-10-30 09:46:51.091032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:12.728 [2024-10-30 09:46:51.091042] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:12.728 [2024-10-30 09:46:51.091052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:12.728 [2024-10-30 09:46:51.091069] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:12.728 [2024-10-30 09:46:51.091078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:12.728 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.728 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:12.728 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.728 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.728 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:12.728 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.728 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:12.728 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.728 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.728 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.728 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.728 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.728 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.728 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.728 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.728 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:12:12.728 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.728 "name": "Existed_Raid", 00:12:12.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.728 "strip_size_kb": 64, 00:12:12.728 "state": "configuring", 00:12:12.728 "raid_level": "raid5f", 00:12:12.728 "superblock": false, 00:12:12.728 "num_base_bdevs": 3, 00:12:12.728 "num_base_bdevs_discovered": 0, 00:12:12.728 "num_base_bdevs_operational": 3, 00:12:12.728 "base_bdevs_list": [ 00:12:12.728 { 00:12:12.728 "name": "BaseBdev1", 00:12:12.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.728 "is_configured": false, 00:12:12.728 "data_offset": 0, 00:12:12.728 "data_size": 0 00:12:12.728 }, 00:12:12.728 { 00:12:12.728 "name": "BaseBdev2", 00:12:12.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.728 "is_configured": false, 00:12:12.728 "data_offset": 0, 00:12:12.728 "data_size": 0 00:12:12.728 }, 00:12:12.728 { 00:12:12.728 "name": "BaseBdev3", 00:12:12.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.728 "is_configured": false, 00:12:12.728 "data_offset": 0, 00:12:12.728 "data_size": 0 00:12:12.728 } 00:12:12.728 ] 00:12:12.728 }' 00:12:12.728 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.728 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.013 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:13.013 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.013 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.013 [2024-10-30 09:46:51.399013] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:13.013 [2024-10-30 09:46:51.399049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:12:13.013 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.013 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:13.013 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.013 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.013 [2024-10-30 09:46:51.407020] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:13.013 [2024-10-30 09:46:51.407054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:13.013 [2024-10-30 09:46:51.407071] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:13.013 [2024-10-30 09:46:51.407080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:13.013 [2024-10-30 09:46:51.407087] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:13.013 [2024-10-30 09:46:51.407095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:13.013 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.013 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:13.013 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.013 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.013 [2024-10-30 09:46:51.439425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.013 BaseBdev1 00:12:13.013 09:46:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.013 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:13.013 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:13.013 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:13.013 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:13.013 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:13.013 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:13.013 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:13.013 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.013 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.013 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.013 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:13.013 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.013 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.013 [ 00:12:13.013 { 00:12:13.013 "name": "BaseBdev1", 00:12:13.013 "aliases": [ 00:12:13.013 "cd368365-9c54-41a0-ba89-0896e66befbc" 00:12:13.013 ], 00:12:13.013 "product_name": "Malloc disk", 00:12:13.013 "block_size": 512, 00:12:13.013 "num_blocks": 65536, 00:12:13.013 "uuid": "cd368365-9c54-41a0-ba89-0896e66befbc", 00:12:13.013 "assigned_rate_limits": { 00:12:13.013 "rw_ios_per_sec": 0, 00:12:13.013 
"rw_mbytes_per_sec": 0, 00:12:13.013 "r_mbytes_per_sec": 0, 00:12:13.014 "w_mbytes_per_sec": 0 00:12:13.014 }, 00:12:13.014 "claimed": true, 00:12:13.014 "claim_type": "exclusive_write", 00:12:13.014 "zoned": false, 00:12:13.014 "supported_io_types": { 00:12:13.014 "read": true, 00:12:13.014 "write": true, 00:12:13.014 "unmap": true, 00:12:13.014 "flush": true, 00:12:13.014 "reset": true, 00:12:13.014 "nvme_admin": false, 00:12:13.014 "nvme_io": false, 00:12:13.014 "nvme_io_md": false, 00:12:13.014 "write_zeroes": true, 00:12:13.014 "zcopy": true, 00:12:13.014 "get_zone_info": false, 00:12:13.014 "zone_management": false, 00:12:13.014 "zone_append": false, 00:12:13.014 "compare": false, 00:12:13.014 "compare_and_write": false, 00:12:13.014 "abort": true, 00:12:13.014 "seek_hole": false, 00:12:13.014 "seek_data": false, 00:12:13.014 "copy": true, 00:12:13.014 "nvme_iov_md": false 00:12:13.014 }, 00:12:13.014 "memory_domains": [ 00:12:13.014 { 00:12:13.014 "dma_device_id": "system", 00:12:13.014 "dma_device_type": 1 00:12:13.014 }, 00:12:13.014 { 00:12:13.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.014 "dma_device_type": 2 00:12:13.014 } 00:12:13.014 ], 00:12:13.014 "driver_specific": {} 00:12:13.014 } 00:12:13.014 ] 00:12:13.014 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.014 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:13.014 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:13.014 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.014 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.014 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:13.014 09:46:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.014 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:13.014 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.014 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.014 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.014 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.014 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.014 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.014 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.014 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.014 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.014 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.014 "name": "Existed_Raid", 00:12:13.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.014 "strip_size_kb": 64, 00:12:13.014 "state": "configuring", 00:12:13.014 "raid_level": "raid5f", 00:12:13.014 "superblock": false, 00:12:13.014 "num_base_bdevs": 3, 00:12:13.014 "num_base_bdevs_discovered": 1, 00:12:13.014 "num_base_bdevs_operational": 3, 00:12:13.014 "base_bdevs_list": [ 00:12:13.014 { 00:12:13.014 "name": "BaseBdev1", 00:12:13.014 "uuid": "cd368365-9c54-41a0-ba89-0896e66befbc", 00:12:13.014 "is_configured": true, 00:12:13.014 "data_offset": 0, 00:12:13.014 "data_size": 65536 00:12:13.014 }, 00:12:13.014 { 00:12:13.014 "name": 
"BaseBdev2", 00:12:13.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.014 "is_configured": false, 00:12:13.014 "data_offset": 0, 00:12:13.014 "data_size": 0 00:12:13.014 }, 00:12:13.014 { 00:12:13.014 "name": "BaseBdev3", 00:12:13.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.014 "is_configured": false, 00:12:13.014 "data_offset": 0, 00:12:13.014 "data_size": 0 00:12:13.014 } 00:12:13.014 ] 00:12:13.014 }' 00:12:13.014 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.014 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.275 [2024-10-30 09:46:51.783535] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:13.275 [2024-10-30 09:46:51.783581] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.275 [2024-10-30 09:46:51.791590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.275 [2024-10-30 09:46:51.793426] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:12:13.275 [2024-10-30 09:46:51.793461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:13.275 [2024-10-30 09:46:51.793470] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:13.275 [2024-10-30 09:46:51.793478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.275 "name": "Existed_Raid", 00:12:13.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.275 "strip_size_kb": 64, 00:12:13.275 "state": "configuring", 00:12:13.275 "raid_level": "raid5f", 00:12:13.275 "superblock": false, 00:12:13.275 "num_base_bdevs": 3, 00:12:13.275 "num_base_bdevs_discovered": 1, 00:12:13.275 "num_base_bdevs_operational": 3, 00:12:13.275 "base_bdevs_list": [ 00:12:13.275 { 00:12:13.275 "name": "BaseBdev1", 00:12:13.275 "uuid": "cd368365-9c54-41a0-ba89-0896e66befbc", 00:12:13.275 "is_configured": true, 00:12:13.275 "data_offset": 0, 00:12:13.275 "data_size": 65536 00:12:13.275 }, 00:12:13.275 { 00:12:13.275 "name": "BaseBdev2", 00:12:13.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.275 "is_configured": false, 00:12:13.275 "data_offset": 0, 00:12:13.275 "data_size": 0 00:12:13.275 }, 00:12:13.275 { 00:12:13.275 "name": "BaseBdev3", 00:12:13.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.275 "is_configured": false, 00:12:13.275 "data_offset": 0, 00:12:13.275 "data_size": 0 00:12:13.275 } 00:12:13.275 ] 00:12:13.275 }' 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.275 09:46:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.537 09:46:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:13.537 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.537 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.537 [2024-10-30 09:46:52.130109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:13.537 BaseBdev2 00:12:13.537 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.537 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:13.537 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:13.537 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:13.537 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:13.537 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:13.537 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:13.537 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:13.537 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.537 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.537 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.537 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:13.537 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.537 09:46:52 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:13.537 [ 00:12:13.537 { 00:12:13.537 "name": "BaseBdev2", 00:12:13.537 "aliases": [ 00:12:13.537 "a736f06f-df92-4121-a8e3-ae866f373d4a" 00:12:13.537 ], 00:12:13.537 "product_name": "Malloc disk", 00:12:13.537 "block_size": 512, 00:12:13.537 "num_blocks": 65536, 00:12:13.537 "uuid": "a736f06f-df92-4121-a8e3-ae866f373d4a", 00:12:13.537 "assigned_rate_limits": { 00:12:13.537 "rw_ios_per_sec": 0, 00:12:13.537 "rw_mbytes_per_sec": 0, 00:12:13.537 "r_mbytes_per_sec": 0, 00:12:13.537 "w_mbytes_per_sec": 0 00:12:13.537 }, 00:12:13.537 "claimed": true, 00:12:13.537 "claim_type": "exclusive_write", 00:12:13.537 "zoned": false, 00:12:13.537 "supported_io_types": { 00:12:13.537 "read": true, 00:12:13.537 "write": true, 00:12:13.537 "unmap": true, 00:12:13.537 "flush": true, 00:12:13.537 "reset": true, 00:12:13.537 "nvme_admin": false, 00:12:13.537 "nvme_io": false, 00:12:13.537 "nvme_io_md": false, 00:12:13.537 "write_zeroes": true, 00:12:13.537 "zcopy": true, 00:12:13.537 "get_zone_info": false, 00:12:13.537 "zone_management": false, 00:12:13.537 "zone_append": false, 00:12:13.537 "compare": false, 00:12:13.537 "compare_and_write": false, 00:12:13.537 "abort": true, 00:12:13.537 "seek_hole": false, 00:12:13.537 "seek_data": false, 00:12:13.537 "copy": true, 00:12:13.537 "nvme_iov_md": false 00:12:13.537 }, 00:12:13.537 "memory_domains": [ 00:12:13.537 { 00:12:13.537 "dma_device_id": "system", 00:12:13.537 "dma_device_type": 1 00:12:13.537 }, 00:12:13.537 { 00:12:13.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.537 "dma_device_type": 2 00:12:13.537 } 00:12:13.537 ], 00:12:13.537 "driver_specific": {} 00:12:13.537 } 00:12:13.537 ] 00:12:13.537 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.537 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:13.537 09:46:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:13.537 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:13.537 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:13.796 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.796 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.796 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:13.796 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.796 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:13.796 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.796 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.796 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.796 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.796 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.796 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.796 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.796 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.796 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.796 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:12:13.796 "name": "Existed_Raid", 00:12:13.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.796 "strip_size_kb": 64, 00:12:13.796 "state": "configuring", 00:12:13.796 "raid_level": "raid5f", 00:12:13.796 "superblock": false, 00:12:13.796 "num_base_bdevs": 3, 00:12:13.796 "num_base_bdevs_discovered": 2, 00:12:13.796 "num_base_bdevs_operational": 3, 00:12:13.796 "base_bdevs_list": [ 00:12:13.796 { 00:12:13.796 "name": "BaseBdev1", 00:12:13.796 "uuid": "cd368365-9c54-41a0-ba89-0896e66befbc", 00:12:13.796 "is_configured": true, 00:12:13.796 "data_offset": 0, 00:12:13.796 "data_size": 65536 00:12:13.796 }, 00:12:13.796 { 00:12:13.796 "name": "BaseBdev2", 00:12:13.796 "uuid": "a736f06f-df92-4121-a8e3-ae866f373d4a", 00:12:13.796 "is_configured": true, 00:12:13.796 "data_offset": 0, 00:12:13.796 "data_size": 65536 00:12:13.796 }, 00:12:13.796 { 00:12:13.796 "name": "BaseBdev3", 00:12:13.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.796 "is_configured": false, 00:12:13.796 "data_offset": 0, 00:12:13.796 "data_size": 0 00:12:13.796 } 00:12:13.796 ] 00:12:13.796 }' 00:12:13.796 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.796 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.108 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:14.108 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.108 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.108 [2024-10-30 09:46:52.508559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:14.108 [2024-10-30 09:46:52.508613] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:14.108 [2024-10-30 09:46:52.508625] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:14.108 [2024-10-30 09:46:52.508882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:14.108 [2024-10-30 09:46:52.512661] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:14.108 [2024-10-30 09:46:52.512683] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:14.108 [2024-10-30 09:46:52.512948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.108 BaseBdev3 00:12:14.108 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.108 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:14.108 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:14.108 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:14.108 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:14.108 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:14.108 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:14.108 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:14.108 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.108 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.108 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.108 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:12:14.108 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.108 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.108 [ 00:12:14.108 { 00:12:14.108 "name": "BaseBdev3", 00:12:14.108 "aliases": [ 00:12:14.108 "6a8b9706-193a-4301-9a44-7269e5a71c4b" 00:12:14.108 ], 00:12:14.109 "product_name": "Malloc disk", 00:12:14.109 "block_size": 512, 00:12:14.109 "num_blocks": 65536, 00:12:14.109 "uuid": "6a8b9706-193a-4301-9a44-7269e5a71c4b", 00:12:14.109 "assigned_rate_limits": { 00:12:14.109 "rw_ios_per_sec": 0, 00:12:14.109 "rw_mbytes_per_sec": 0, 00:12:14.109 "r_mbytes_per_sec": 0, 00:12:14.109 "w_mbytes_per_sec": 0 00:12:14.109 }, 00:12:14.109 "claimed": true, 00:12:14.109 "claim_type": "exclusive_write", 00:12:14.109 "zoned": false, 00:12:14.109 "supported_io_types": { 00:12:14.109 "read": true, 00:12:14.109 "write": true, 00:12:14.109 "unmap": true, 00:12:14.109 "flush": true, 00:12:14.109 "reset": true, 00:12:14.109 "nvme_admin": false, 00:12:14.109 "nvme_io": false, 00:12:14.109 "nvme_io_md": false, 00:12:14.109 "write_zeroes": true, 00:12:14.109 "zcopy": true, 00:12:14.109 "get_zone_info": false, 00:12:14.109 "zone_management": false, 00:12:14.109 "zone_append": false, 00:12:14.109 "compare": false, 00:12:14.109 "compare_and_write": false, 00:12:14.109 "abort": true, 00:12:14.109 "seek_hole": false, 00:12:14.109 "seek_data": false, 00:12:14.109 "copy": true, 00:12:14.109 "nvme_iov_md": false 00:12:14.109 }, 00:12:14.109 "memory_domains": [ 00:12:14.109 { 00:12:14.109 "dma_device_id": "system", 00:12:14.109 "dma_device_type": 1 00:12:14.109 }, 00:12:14.109 { 00:12:14.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.109 "dma_device_type": 2 00:12:14.109 } 00:12:14.109 ], 00:12:14.109 "driver_specific": {} 00:12:14.109 } 00:12:14.109 ] 00:12:14.109 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:12:14.109 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:14.109 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:14.109 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:14.109 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:14.109 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.109 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.109 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:14.109 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.109 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:14.109 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.109 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.109 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.109 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.109 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.109 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.109 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.109 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.109 09:46:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.109 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.109 "name": "Existed_Raid", 00:12:14.109 "uuid": "a502d3d7-ef0f-4573-a7e8-b8f494251dbe", 00:12:14.109 "strip_size_kb": 64, 00:12:14.109 "state": "online", 00:12:14.109 "raid_level": "raid5f", 00:12:14.109 "superblock": false, 00:12:14.109 "num_base_bdevs": 3, 00:12:14.109 "num_base_bdevs_discovered": 3, 00:12:14.109 "num_base_bdevs_operational": 3, 00:12:14.109 "base_bdevs_list": [ 00:12:14.109 { 00:12:14.109 "name": "BaseBdev1", 00:12:14.109 "uuid": "cd368365-9c54-41a0-ba89-0896e66befbc", 00:12:14.109 "is_configured": true, 00:12:14.109 "data_offset": 0, 00:12:14.109 "data_size": 65536 00:12:14.109 }, 00:12:14.109 { 00:12:14.109 "name": "BaseBdev2", 00:12:14.109 "uuid": "a736f06f-df92-4121-a8e3-ae866f373d4a", 00:12:14.109 "is_configured": true, 00:12:14.109 "data_offset": 0, 00:12:14.109 "data_size": 65536 00:12:14.109 }, 00:12:14.109 { 00:12:14.109 "name": "BaseBdev3", 00:12:14.109 "uuid": "6a8b9706-193a-4301-9a44-7269e5a71c4b", 00:12:14.109 "is_configured": true, 00:12:14.109 "data_offset": 0, 00:12:14.109 "data_size": 65536 00:12:14.109 } 00:12:14.109 ] 00:12:14.109 }' 00:12:14.109 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.109 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.371 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:14.371 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:14.371 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:14.371 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:14.371 09:46:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:14.371 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:14.371 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:14.371 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.371 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.371 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:14.371 [2024-10-30 09:46:52.861205] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:14.371 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.371 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:14.371 "name": "Existed_Raid", 00:12:14.371 "aliases": [ 00:12:14.371 "a502d3d7-ef0f-4573-a7e8-b8f494251dbe" 00:12:14.371 ], 00:12:14.371 "product_name": "Raid Volume", 00:12:14.371 "block_size": 512, 00:12:14.371 "num_blocks": 131072, 00:12:14.371 "uuid": "a502d3d7-ef0f-4573-a7e8-b8f494251dbe", 00:12:14.371 "assigned_rate_limits": { 00:12:14.371 "rw_ios_per_sec": 0, 00:12:14.371 "rw_mbytes_per_sec": 0, 00:12:14.371 "r_mbytes_per_sec": 0, 00:12:14.371 "w_mbytes_per_sec": 0 00:12:14.371 }, 00:12:14.371 "claimed": false, 00:12:14.371 "zoned": false, 00:12:14.371 "supported_io_types": { 00:12:14.371 "read": true, 00:12:14.371 "write": true, 00:12:14.371 "unmap": false, 00:12:14.371 "flush": false, 00:12:14.371 "reset": true, 00:12:14.371 "nvme_admin": false, 00:12:14.371 "nvme_io": false, 00:12:14.372 "nvme_io_md": false, 00:12:14.372 "write_zeroes": true, 00:12:14.372 "zcopy": false, 00:12:14.372 "get_zone_info": false, 00:12:14.372 "zone_management": false, 00:12:14.372 "zone_append": false, 
00:12:14.372 "compare": false, 00:12:14.372 "compare_and_write": false, 00:12:14.372 "abort": false, 00:12:14.372 "seek_hole": false, 00:12:14.372 "seek_data": false, 00:12:14.372 "copy": false, 00:12:14.372 "nvme_iov_md": false 00:12:14.372 }, 00:12:14.372 "driver_specific": { 00:12:14.372 "raid": { 00:12:14.372 "uuid": "a502d3d7-ef0f-4573-a7e8-b8f494251dbe", 00:12:14.372 "strip_size_kb": 64, 00:12:14.372 "state": "online", 00:12:14.372 "raid_level": "raid5f", 00:12:14.372 "superblock": false, 00:12:14.372 "num_base_bdevs": 3, 00:12:14.372 "num_base_bdevs_discovered": 3, 00:12:14.372 "num_base_bdevs_operational": 3, 00:12:14.372 "base_bdevs_list": [ 00:12:14.372 { 00:12:14.372 "name": "BaseBdev1", 00:12:14.372 "uuid": "cd368365-9c54-41a0-ba89-0896e66befbc", 00:12:14.372 "is_configured": true, 00:12:14.372 "data_offset": 0, 00:12:14.372 "data_size": 65536 00:12:14.372 }, 00:12:14.372 { 00:12:14.372 "name": "BaseBdev2", 00:12:14.372 "uuid": "a736f06f-df92-4121-a8e3-ae866f373d4a", 00:12:14.372 "is_configured": true, 00:12:14.372 "data_offset": 0, 00:12:14.372 "data_size": 65536 00:12:14.372 }, 00:12:14.372 { 00:12:14.372 "name": "BaseBdev3", 00:12:14.372 "uuid": "6a8b9706-193a-4301-9a44-7269e5a71c4b", 00:12:14.372 "is_configured": true, 00:12:14.372 "data_offset": 0, 00:12:14.372 "data_size": 65536 00:12:14.372 } 00:12:14.372 ] 00:12:14.372 } 00:12:14.372 } 00:12:14.372 }' 00:12:14.372 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:14.372 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:14.372 BaseBdev2 00:12:14.372 BaseBdev3' 00:12:14.372 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.372 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:12:14.372 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.372 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.372 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:14.372 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.372 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.372 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.372 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.372 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.372 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.372 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:14.372 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.372 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.372 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.372 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.635 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.635 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.635 09:46:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.635 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:14.635 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.635 09:46:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.635 09:46:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.635 [2024-10-30 09:46:53.037047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:14.635 
09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.635 "name": "Existed_Raid", 00:12:14.635 "uuid": "a502d3d7-ef0f-4573-a7e8-b8f494251dbe", 00:12:14.635 "strip_size_kb": 64, 00:12:14.635 "state": 
"online", 00:12:14.635 "raid_level": "raid5f", 00:12:14.635 "superblock": false, 00:12:14.635 "num_base_bdevs": 3, 00:12:14.635 "num_base_bdevs_discovered": 2, 00:12:14.635 "num_base_bdevs_operational": 2, 00:12:14.635 "base_bdevs_list": [ 00:12:14.635 { 00:12:14.635 "name": null, 00:12:14.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.635 "is_configured": false, 00:12:14.635 "data_offset": 0, 00:12:14.635 "data_size": 65536 00:12:14.635 }, 00:12:14.635 { 00:12:14.635 "name": "BaseBdev2", 00:12:14.635 "uuid": "a736f06f-df92-4121-a8e3-ae866f373d4a", 00:12:14.635 "is_configured": true, 00:12:14.635 "data_offset": 0, 00:12:14.635 "data_size": 65536 00:12:14.635 }, 00:12:14.635 { 00:12:14.635 "name": "BaseBdev3", 00:12:14.635 "uuid": "6a8b9706-193a-4301-9a44-7269e5a71c4b", 00:12:14.635 "is_configured": true, 00:12:14.635 "data_offset": 0, 00:12:14.635 "data_size": 65536 00:12:14.635 } 00:12:14.635 ] 00:12:14.635 }' 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.635 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.970 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:14.970 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:14.970 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.970 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:14.970 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.970 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.970 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.970 09:46:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:14.970 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:14.970 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:14.970 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.970 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.970 [2024-10-30 09:46:53.419969] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:14.970 [2024-10-30 09:46:53.420077] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:14.970 [2024-10-30 09:46:53.477951] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:14.970 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.970 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:14.970 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.232 [2024-10-30 09:46:53.514008] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:15.232 [2024-10-30 09:46:53.514053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.232 BaseBdev2 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:15.232 [ 00:12:15.232 { 00:12:15.232 "name": "BaseBdev2", 00:12:15.232 "aliases": [ 00:12:15.232 "5300d332-ce10-485e-90f2-1931c6d51f4f" 00:12:15.232 ], 00:12:15.232 "product_name": "Malloc disk", 00:12:15.232 "block_size": 512, 00:12:15.232 "num_blocks": 65536, 00:12:15.232 "uuid": "5300d332-ce10-485e-90f2-1931c6d51f4f", 00:12:15.232 "assigned_rate_limits": { 00:12:15.232 "rw_ios_per_sec": 0, 00:12:15.232 "rw_mbytes_per_sec": 0, 00:12:15.232 "r_mbytes_per_sec": 0, 00:12:15.232 "w_mbytes_per_sec": 0 00:12:15.232 }, 00:12:15.232 "claimed": false, 00:12:15.232 "zoned": false, 00:12:15.232 "supported_io_types": { 00:12:15.232 "read": true, 00:12:15.232 "write": true, 00:12:15.232 "unmap": true, 00:12:15.232 "flush": true, 00:12:15.232 "reset": true, 00:12:15.232 "nvme_admin": false, 00:12:15.232 "nvme_io": false, 00:12:15.232 "nvme_io_md": false, 00:12:15.232 "write_zeroes": true, 00:12:15.232 "zcopy": true, 00:12:15.232 "get_zone_info": false, 00:12:15.232 "zone_management": false, 00:12:15.232 "zone_append": false, 00:12:15.232 "compare": false, 00:12:15.232 "compare_and_write": false, 00:12:15.232 "abort": true, 00:12:15.232 "seek_hole": false, 00:12:15.232 "seek_data": false, 00:12:15.232 "copy": true, 00:12:15.232 "nvme_iov_md": false 00:12:15.232 }, 00:12:15.232 "memory_domains": [ 00:12:15.232 { 00:12:15.232 "dma_device_id": "system", 00:12:15.232 "dma_device_type": 1 00:12:15.232 }, 00:12:15.232 { 00:12:15.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.232 "dma_device_type": 2 00:12:15.232 } 00:12:15.232 ], 00:12:15.232 "driver_specific": {} 00:12:15.232 } 00:12:15.232 ] 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.232 BaseBdev3 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.232 09:46:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:15.232 [ 00:12:15.232 { 00:12:15.232 "name": "BaseBdev3", 00:12:15.232 "aliases": [ 00:12:15.232 "f38858af-7061-4aec-b1d4-2f24faf0ce72" 00:12:15.232 ], 00:12:15.232 "product_name": "Malloc disk", 00:12:15.232 "block_size": 512, 00:12:15.232 "num_blocks": 65536, 00:12:15.232 "uuid": "f38858af-7061-4aec-b1d4-2f24faf0ce72", 00:12:15.232 "assigned_rate_limits": { 00:12:15.232 "rw_ios_per_sec": 0, 00:12:15.232 "rw_mbytes_per_sec": 0, 00:12:15.232 "r_mbytes_per_sec": 0, 00:12:15.232 "w_mbytes_per_sec": 0 00:12:15.232 }, 00:12:15.232 "claimed": false, 00:12:15.232 "zoned": false, 00:12:15.232 "supported_io_types": { 00:12:15.232 "read": true, 00:12:15.232 "write": true, 00:12:15.232 "unmap": true, 00:12:15.232 "flush": true, 00:12:15.232 "reset": true, 00:12:15.232 "nvme_admin": false, 00:12:15.232 "nvme_io": false, 00:12:15.232 "nvme_io_md": false, 00:12:15.232 "write_zeroes": true, 00:12:15.232 "zcopy": true, 00:12:15.232 "get_zone_info": false, 00:12:15.232 "zone_management": false, 00:12:15.232 "zone_append": false, 00:12:15.232 "compare": false, 00:12:15.232 "compare_and_write": false, 00:12:15.232 "abort": true, 00:12:15.233 "seek_hole": false, 00:12:15.233 "seek_data": false, 00:12:15.233 "copy": true, 00:12:15.233 "nvme_iov_md": false 00:12:15.233 }, 00:12:15.233 "memory_domains": [ 00:12:15.233 { 00:12:15.233 "dma_device_id": "system", 00:12:15.233 "dma_device_type": 1 00:12:15.233 }, 00:12:15.233 { 00:12:15.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.233 "dma_device_type": 2 00:12:15.233 } 00:12:15.233 ], 00:12:15.233 "driver_specific": {} 00:12:15.233 } 00:12:15.233 ] 00:12:15.233 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.233 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:15.233 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:15.233 09:46:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:15.233 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:15.233 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.233 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.233 [2024-10-30 09:46:53.720283] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:15.233 [2024-10-30 09:46:53.720322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:15.233 [2024-10-30 09:46:53.720340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:15.233 [2024-10-30 09:46:53.722168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:15.233 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.233 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:15.233 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.233 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.233 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:15.233 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.233 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:15.233 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.233 09:46:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.233 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.233 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.233 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.233 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.233 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.233 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.233 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.233 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.233 "name": "Existed_Raid", 00:12:15.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.233 "strip_size_kb": 64, 00:12:15.233 "state": "configuring", 00:12:15.233 "raid_level": "raid5f", 00:12:15.233 "superblock": false, 00:12:15.233 "num_base_bdevs": 3, 00:12:15.233 "num_base_bdevs_discovered": 2, 00:12:15.233 "num_base_bdevs_operational": 3, 00:12:15.233 "base_bdevs_list": [ 00:12:15.233 { 00:12:15.233 "name": "BaseBdev1", 00:12:15.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.233 "is_configured": false, 00:12:15.233 "data_offset": 0, 00:12:15.233 "data_size": 0 00:12:15.233 }, 00:12:15.233 { 00:12:15.233 "name": "BaseBdev2", 00:12:15.233 "uuid": "5300d332-ce10-485e-90f2-1931c6d51f4f", 00:12:15.233 "is_configured": true, 00:12:15.233 "data_offset": 0, 00:12:15.233 "data_size": 65536 00:12:15.233 }, 00:12:15.233 { 00:12:15.233 "name": "BaseBdev3", 00:12:15.233 "uuid": "f38858af-7061-4aec-b1d4-2f24faf0ce72", 00:12:15.233 "is_configured": true, 
00:12:15.233 "data_offset": 0, 00:12:15.233 "data_size": 65536 00:12:15.233 } 00:12:15.233 ] 00:12:15.233 }' 00:12:15.233 09:46:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.233 09:46:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.494 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:15.494 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.494 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.494 [2024-10-30 09:46:54.048349] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:15.494 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.494 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:15.494 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.494 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.494 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:15.494 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.494 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:15.494 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.494 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.494 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.494 09:46:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.494 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.494 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.494 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.494 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.494 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.494 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.494 "name": "Existed_Raid", 00:12:15.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.494 "strip_size_kb": 64, 00:12:15.494 "state": "configuring", 00:12:15.494 "raid_level": "raid5f", 00:12:15.494 "superblock": false, 00:12:15.494 "num_base_bdevs": 3, 00:12:15.494 "num_base_bdevs_discovered": 1, 00:12:15.494 "num_base_bdevs_operational": 3, 00:12:15.495 "base_bdevs_list": [ 00:12:15.495 { 00:12:15.495 "name": "BaseBdev1", 00:12:15.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.495 "is_configured": false, 00:12:15.495 "data_offset": 0, 00:12:15.495 "data_size": 0 00:12:15.495 }, 00:12:15.495 { 00:12:15.495 "name": null, 00:12:15.495 "uuid": "5300d332-ce10-485e-90f2-1931c6d51f4f", 00:12:15.495 "is_configured": false, 00:12:15.495 "data_offset": 0, 00:12:15.495 "data_size": 65536 00:12:15.495 }, 00:12:15.495 { 00:12:15.495 "name": "BaseBdev3", 00:12:15.495 "uuid": "f38858af-7061-4aec-b1d4-2f24faf0ce72", 00:12:15.495 "is_configured": true, 00:12:15.495 "data_offset": 0, 00:12:15.495 "data_size": 65536 00:12:15.495 } 00:12:15.495 ] 00:12:15.495 }' 00:12:15.495 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.495 09:46:54 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.756 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.756 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:15.756 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.756 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.756 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.756 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:15.756 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:15.756 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.756 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.018 [2024-10-30 09:46:54.398840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:16.018 BaseBdev1 00:12:16.018 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.018 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:16.018 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:16.018 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:16.018 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:16.018 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:16.018 09:46:54 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:16.018 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:16.018 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.018 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.018 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.018 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:16.018 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.018 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.018 [ 00:12:16.018 { 00:12:16.018 "name": "BaseBdev1", 00:12:16.018 "aliases": [ 00:12:16.018 "f38d9909-8682-4af8-81ae-4728b366fdf5" 00:12:16.018 ], 00:12:16.019 "product_name": "Malloc disk", 00:12:16.019 "block_size": 512, 00:12:16.019 "num_blocks": 65536, 00:12:16.019 "uuid": "f38d9909-8682-4af8-81ae-4728b366fdf5", 00:12:16.019 "assigned_rate_limits": { 00:12:16.019 "rw_ios_per_sec": 0, 00:12:16.019 "rw_mbytes_per_sec": 0, 00:12:16.019 "r_mbytes_per_sec": 0, 00:12:16.019 "w_mbytes_per_sec": 0 00:12:16.019 }, 00:12:16.019 "claimed": true, 00:12:16.019 "claim_type": "exclusive_write", 00:12:16.019 "zoned": false, 00:12:16.019 "supported_io_types": { 00:12:16.019 "read": true, 00:12:16.019 "write": true, 00:12:16.019 "unmap": true, 00:12:16.019 "flush": true, 00:12:16.019 "reset": true, 00:12:16.019 "nvme_admin": false, 00:12:16.019 "nvme_io": false, 00:12:16.019 "nvme_io_md": false, 00:12:16.019 "write_zeroes": true, 00:12:16.019 "zcopy": true, 00:12:16.019 "get_zone_info": false, 00:12:16.019 "zone_management": false, 00:12:16.019 "zone_append": false, 00:12:16.019 
"compare": false, 00:12:16.019 "compare_and_write": false, 00:12:16.019 "abort": true, 00:12:16.019 "seek_hole": false, 00:12:16.019 "seek_data": false, 00:12:16.019 "copy": true, 00:12:16.019 "nvme_iov_md": false 00:12:16.019 }, 00:12:16.019 "memory_domains": [ 00:12:16.019 { 00:12:16.019 "dma_device_id": "system", 00:12:16.019 "dma_device_type": 1 00:12:16.019 }, 00:12:16.019 { 00:12:16.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.019 "dma_device_type": 2 00:12:16.019 } 00:12:16.019 ], 00:12:16.019 "driver_specific": {} 00:12:16.019 } 00:12:16.019 ] 00:12:16.019 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.019 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:16.019 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:16.019 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.019 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.019 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:16.019 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.019 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.019 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.019 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.019 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.019 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.019 09:46:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.019 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.019 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.019 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.019 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.019 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.019 "name": "Existed_Raid", 00:12:16.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.019 "strip_size_kb": 64, 00:12:16.019 "state": "configuring", 00:12:16.019 "raid_level": "raid5f", 00:12:16.019 "superblock": false, 00:12:16.019 "num_base_bdevs": 3, 00:12:16.019 "num_base_bdevs_discovered": 2, 00:12:16.019 "num_base_bdevs_operational": 3, 00:12:16.019 "base_bdevs_list": [ 00:12:16.019 { 00:12:16.019 "name": "BaseBdev1", 00:12:16.019 "uuid": "f38d9909-8682-4af8-81ae-4728b366fdf5", 00:12:16.019 "is_configured": true, 00:12:16.019 "data_offset": 0, 00:12:16.019 "data_size": 65536 00:12:16.019 }, 00:12:16.019 { 00:12:16.019 "name": null, 00:12:16.019 "uuid": "5300d332-ce10-485e-90f2-1931c6d51f4f", 00:12:16.019 "is_configured": false, 00:12:16.019 "data_offset": 0, 00:12:16.019 "data_size": 65536 00:12:16.019 }, 00:12:16.019 { 00:12:16.019 "name": "BaseBdev3", 00:12:16.019 "uuid": "f38858af-7061-4aec-b1d4-2f24faf0ce72", 00:12:16.019 "is_configured": true, 00:12:16.019 "data_offset": 0, 00:12:16.019 "data_size": 65536 00:12:16.019 } 00:12:16.019 ] 00:12:16.019 }' 00:12:16.019 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.019 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.281 09:46:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.281 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.281 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.281 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:16.281 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.281 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:16.281 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:16.281 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.281 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.281 [2024-10-30 09:46:54.758979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:16.281 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.281 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:16.281 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.281 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.281 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:16.281 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.281 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.281 09:46:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.281 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.281 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.281 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.281 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.281 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.281 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.281 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.281 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.281 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.281 "name": "Existed_Raid", 00:12:16.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.281 "strip_size_kb": 64, 00:12:16.281 "state": "configuring", 00:12:16.281 "raid_level": "raid5f", 00:12:16.281 "superblock": false, 00:12:16.281 "num_base_bdevs": 3, 00:12:16.281 "num_base_bdevs_discovered": 1, 00:12:16.281 "num_base_bdevs_operational": 3, 00:12:16.281 "base_bdevs_list": [ 00:12:16.281 { 00:12:16.281 "name": "BaseBdev1", 00:12:16.281 "uuid": "f38d9909-8682-4af8-81ae-4728b366fdf5", 00:12:16.281 "is_configured": true, 00:12:16.281 "data_offset": 0, 00:12:16.281 "data_size": 65536 00:12:16.281 }, 00:12:16.281 { 00:12:16.281 "name": null, 00:12:16.282 "uuid": "5300d332-ce10-485e-90f2-1931c6d51f4f", 00:12:16.282 "is_configured": false, 00:12:16.282 "data_offset": 0, 00:12:16.282 "data_size": 65536 00:12:16.282 }, 00:12:16.282 { 00:12:16.282 "name": null, 
00:12:16.282 "uuid": "f38858af-7061-4aec-b1d4-2f24faf0ce72", 00:12:16.282 "is_configured": false, 00:12:16.282 "data_offset": 0, 00:12:16.282 "data_size": 65536 00:12:16.282 } 00:12:16.282 ] 00:12:16.282 }' 00:12:16.282 09:46:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.282 09:46:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.541 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:16.541 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.541 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.541 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.541 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.541 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:16.541 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:16.541 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.541 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.541 [2024-10-30 09:46:55.099067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:16.542 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.542 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:16.542 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.542 09:46:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.542 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:16.542 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.542 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.542 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.542 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.542 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.542 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.542 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.542 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.542 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.542 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.542 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.542 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.542 "name": "Existed_Raid", 00:12:16.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.542 "strip_size_kb": 64, 00:12:16.542 "state": "configuring", 00:12:16.542 "raid_level": "raid5f", 00:12:16.542 "superblock": false, 00:12:16.542 "num_base_bdevs": 3, 00:12:16.542 "num_base_bdevs_discovered": 2, 00:12:16.542 "num_base_bdevs_operational": 3, 00:12:16.542 "base_bdevs_list": [ 00:12:16.542 { 
00:12:16.542 "name": "BaseBdev1", 00:12:16.542 "uuid": "f38d9909-8682-4af8-81ae-4728b366fdf5", 00:12:16.542 "is_configured": true, 00:12:16.542 "data_offset": 0, 00:12:16.542 "data_size": 65536 00:12:16.542 }, 00:12:16.542 { 00:12:16.542 "name": null, 00:12:16.542 "uuid": "5300d332-ce10-485e-90f2-1931c6d51f4f", 00:12:16.542 "is_configured": false, 00:12:16.542 "data_offset": 0, 00:12:16.542 "data_size": 65536 00:12:16.542 }, 00:12:16.542 { 00:12:16.542 "name": "BaseBdev3", 00:12:16.542 "uuid": "f38858af-7061-4aec-b1d4-2f24faf0ce72", 00:12:16.542 "is_configured": true, 00:12:16.542 "data_offset": 0, 00:12:16.542 "data_size": 65536 00:12:16.542 } 00:12:16.542 ] 00:12:16.542 }' 00:12:16.542 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.542 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.800 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.800 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.800 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.800 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:16.800 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.057 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:17.057 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:17.057 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.057 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.057 [2024-10-30 09:46:55.435134] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:17.057 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.057 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:17.057 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.057 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.057 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:17.057 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.057 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.057 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.057 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.057 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.057 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.057 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.057 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.057 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.057 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.057 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.057 09:46:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.057 "name": "Existed_Raid", 00:12:17.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.057 "strip_size_kb": 64, 00:12:17.057 "state": "configuring", 00:12:17.057 "raid_level": "raid5f", 00:12:17.057 "superblock": false, 00:12:17.057 "num_base_bdevs": 3, 00:12:17.057 "num_base_bdevs_discovered": 1, 00:12:17.057 "num_base_bdevs_operational": 3, 00:12:17.057 "base_bdevs_list": [ 00:12:17.057 { 00:12:17.057 "name": null, 00:12:17.057 "uuid": "f38d9909-8682-4af8-81ae-4728b366fdf5", 00:12:17.058 "is_configured": false, 00:12:17.058 "data_offset": 0, 00:12:17.058 "data_size": 65536 00:12:17.058 }, 00:12:17.058 { 00:12:17.058 "name": null, 00:12:17.058 "uuid": "5300d332-ce10-485e-90f2-1931c6d51f4f", 00:12:17.058 "is_configured": false, 00:12:17.058 "data_offset": 0, 00:12:17.058 "data_size": 65536 00:12:17.058 }, 00:12:17.058 { 00:12:17.058 "name": "BaseBdev3", 00:12:17.058 "uuid": "f38858af-7061-4aec-b1d4-2f24faf0ce72", 00:12:17.058 "is_configured": true, 00:12:17.058 "data_offset": 0, 00:12:17.058 "data_size": 65536 00:12:17.058 } 00:12:17.058 ] 00:12:17.058 }' 00:12:17.058 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.058 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.315 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.315 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.315 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.315 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:17.315 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.315 09:46:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:17.316 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:17.316 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.316 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.316 [2024-10-30 09:46:55.817647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:17.316 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.316 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:17.316 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.316 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.316 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:17.316 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.316 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.316 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.316 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.316 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.316 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.316 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.316 09:46:55 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.316 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.316 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.316 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.316 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.316 "name": "Existed_Raid", 00:12:17.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.316 "strip_size_kb": 64, 00:12:17.316 "state": "configuring", 00:12:17.316 "raid_level": "raid5f", 00:12:17.316 "superblock": false, 00:12:17.316 "num_base_bdevs": 3, 00:12:17.316 "num_base_bdevs_discovered": 2, 00:12:17.316 "num_base_bdevs_operational": 3, 00:12:17.316 "base_bdevs_list": [ 00:12:17.316 { 00:12:17.316 "name": null, 00:12:17.316 "uuid": "f38d9909-8682-4af8-81ae-4728b366fdf5", 00:12:17.316 "is_configured": false, 00:12:17.316 "data_offset": 0, 00:12:17.316 "data_size": 65536 00:12:17.316 }, 00:12:17.316 { 00:12:17.316 "name": "BaseBdev2", 00:12:17.316 "uuid": "5300d332-ce10-485e-90f2-1931c6d51f4f", 00:12:17.316 "is_configured": true, 00:12:17.316 "data_offset": 0, 00:12:17.316 "data_size": 65536 00:12:17.316 }, 00:12:17.316 { 00:12:17.316 "name": "BaseBdev3", 00:12:17.316 "uuid": "f38858af-7061-4aec-b1d4-2f24faf0ce72", 00:12:17.316 "is_configured": true, 00:12:17.316 "data_offset": 0, 00:12:17.316 "data_size": 65536 00:12:17.316 } 00:12:17.316 ] 00:12:17.316 }' 00:12:17.316 09:46:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.316 09:46:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.574 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.574 09:46:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.574 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.574 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:17.574 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.574 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:17.574 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.574 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.574 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.574 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:17.574 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.574 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f38d9909-8682-4af8-81ae-4728b366fdf5 00:12:17.574 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.574 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.833 [2024-10-30 09:46:56.199628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:17.833 [2024-10-30 09:46:56.199663] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:17.833 [2024-10-30 09:46:56.199671] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:17.833 [2024-10-30 09:46:56.199860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:12:17.833 [2024-10-30 09:46:56.202691] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:17.833 [2024-10-30 09:46:56.202710] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:17.833 [2024-10-30 09:46:56.202884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.833 NewBaseBdev 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.833 09:46:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.833 [ 00:12:17.833 { 00:12:17.833 "name": "NewBaseBdev", 00:12:17.833 "aliases": [ 00:12:17.833 "f38d9909-8682-4af8-81ae-4728b366fdf5" 00:12:17.833 ], 00:12:17.833 "product_name": "Malloc disk", 00:12:17.833 "block_size": 512, 00:12:17.833 "num_blocks": 65536, 00:12:17.833 "uuid": "f38d9909-8682-4af8-81ae-4728b366fdf5", 00:12:17.833 "assigned_rate_limits": { 00:12:17.833 "rw_ios_per_sec": 0, 00:12:17.833 "rw_mbytes_per_sec": 0, 00:12:17.833 "r_mbytes_per_sec": 0, 00:12:17.833 "w_mbytes_per_sec": 0 00:12:17.833 }, 00:12:17.833 "claimed": true, 00:12:17.833 "claim_type": "exclusive_write", 00:12:17.833 "zoned": false, 00:12:17.833 "supported_io_types": { 00:12:17.833 "read": true, 00:12:17.833 "write": true, 00:12:17.833 "unmap": true, 00:12:17.833 "flush": true, 00:12:17.833 "reset": true, 00:12:17.833 "nvme_admin": false, 00:12:17.833 "nvme_io": false, 00:12:17.833 "nvme_io_md": false, 00:12:17.833 "write_zeroes": true, 00:12:17.833 "zcopy": true, 00:12:17.833 "get_zone_info": false, 00:12:17.833 "zone_management": false, 00:12:17.833 "zone_append": false, 00:12:17.833 "compare": false, 00:12:17.833 "compare_and_write": false, 00:12:17.833 "abort": true, 00:12:17.833 "seek_hole": false, 00:12:17.833 "seek_data": false, 00:12:17.833 "copy": true, 00:12:17.833 "nvme_iov_md": false 00:12:17.833 }, 00:12:17.833 "memory_domains": [ 00:12:17.833 { 00:12:17.833 "dma_device_id": "system", 00:12:17.833 "dma_device_type": 1 00:12:17.833 }, 00:12:17.833 { 00:12:17.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.833 "dma_device_type": 2 00:12:17.833 } 00:12:17.833 ], 00:12:17.833 "driver_specific": {} 00:12:17.833 } 00:12:17.833 ] 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:17.833 09:46:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.833 "name": "Existed_Raid", 00:12:17.833 "uuid": "1aab17de-7a1a-436d-b24d-50183f5e2df3", 00:12:17.833 "strip_size_kb": 64, 00:12:17.833 "state": "online", 
00:12:17.833 "raid_level": "raid5f", 00:12:17.833 "superblock": false, 00:12:17.833 "num_base_bdevs": 3, 00:12:17.833 "num_base_bdevs_discovered": 3, 00:12:17.833 "num_base_bdevs_operational": 3, 00:12:17.833 "base_bdevs_list": [ 00:12:17.833 { 00:12:17.833 "name": "NewBaseBdev", 00:12:17.833 "uuid": "f38d9909-8682-4af8-81ae-4728b366fdf5", 00:12:17.833 "is_configured": true, 00:12:17.833 "data_offset": 0, 00:12:17.833 "data_size": 65536 00:12:17.833 }, 00:12:17.833 { 00:12:17.833 "name": "BaseBdev2", 00:12:17.833 "uuid": "5300d332-ce10-485e-90f2-1931c6d51f4f", 00:12:17.833 "is_configured": true, 00:12:17.833 "data_offset": 0, 00:12:17.833 "data_size": 65536 00:12:17.833 }, 00:12:17.833 { 00:12:17.833 "name": "BaseBdev3", 00:12:17.833 "uuid": "f38858af-7061-4aec-b1d4-2f24faf0ce72", 00:12:17.833 "is_configured": true, 00:12:17.833 "data_offset": 0, 00:12:17.833 "data_size": 65536 00:12:17.833 } 00:12:17.833 ] 00:12:17.833 }' 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.833 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:18.091 09:46:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.091 [2024-10-30 09:46:56.530311] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:18.091 "name": "Existed_Raid", 00:12:18.091 "aliases": [ 00:12:18.091 "1aab17de-7a1a-436d-b24d-50183f5e2df3" 00:12:18.091 ], 00:12:18.091 "product_name": "Raid Volume", 00:12:18.091 "block_size": 512, 00:12:18.091 "num_blocks": 131072, 00:12:18.091 "uuid": "1aab17de-7a1a-436d-b24d-50183f5e2df3", 00:12:18.091 "assigned_rate_limits": { 00:12:18.091 "rw_ios_per_sec": 0, 00:12:18.091 "rw_mbytes_per_sec": 0, 00:12:18.091 "r_mbytes_per_sec": 0, 00:12:18.091 "w_mbytes_per_sec": 0 00:12:18.091 }, 00:12:18.091 "claimed": false, 00:12:18.091 "zoned": false, 00:12:18.091 "supported_io_types": { 00:12:18.091 "read": true, 00:12:18.091 "write": true, 00:12:18.091 "unmap": false, 00:12:18.091 "flush": false, 00:12:18.091 "reset": true, 00:12:18.091 "nvme_admin": false, 00:12:18.091 "nvme_io": false, 00:12:18.091 "nvme_io_md": false, 00:12:18.091 "write_zeroes": true, 00:12:18.091 "zcopy": false, 00:12:18.091 "get_zone_info": false, 00:12:18.091 "zone_management": false, 00:12:18.091 "zone_append": false, 00:12:18.091 "compare": false, 00:12:18.091 "compare_and_write": false, 00:12:18.091 "abort": false, 00:12:18.091 "seek_hole": false, 00:12:18.091 "seek_data": false, 00:12:18.091 "copy": false, 00:12:18.091 "nvme_iov_md": false 00:12:18.091 }, 00:12:18.091 "driver_specific": { 00:12:18.091 "raid": { 00:12:18.091 "uuid": 
"1aab17de-7a1a-436d-b24d-50183f5e2df3", 00:12:18.091 "strip_size_kb": 64, 00:12:18.091 "state": "online", 00:12:18.091 "raid_level": "raid5f", 00:12:18.091 "superblock": false, 00:12:18.091 "num_base_bdevs": 3, 00:12:18.091 "num_base_bdevs_discovered": 3, 00:12:18.091 "num_base_bdevs_operational": 3, 00:12:18.091 "base_bdevs_list": [ 00:12:18.091 { 00:12:18.091 "name": "NewBaseBdev", 00:12:18.091 "uuid": "f38d9909-8682-4af8-81ae-4728b366fdf5", 00:12:18.091 "is_configured": true, 00:12:18.091 "data_offset": 0, 00:12:18.091 "data_size": 65536 00:12:18.091 }, 00:12:18.091 { 00:12:18.091 "name": "BaseBdev2", 00:12:18.091 "uuid": "5300d332-ce10-485e-90f2-1931c6d51f4f", 00:12:18.091 "is_configured": true, 00:12:18.091 "data_offset": 0, 00:12:18.091 "data_size": 65536 00:12:18.091 }, 00:12:18.091 { 00:12:18.091 "name": "BaseBdev3", 00:12:18.091 "uuid": "f38858af-7061-4aec-b1d4-2f24faf0ce72", 00:12:18.091 "is_configured": true, 00:12:18.091 "data_offset": 0, 00:12:18.091 "data_size": 65536 00:12:18.091 } 00:12:18.091 ] 00:12:18.091 } 00:12:18.091 } 00:12:18.091 }' 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:18.091 BaseBdev2 00:12:18.091 BaseBdev3' 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.091 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.349 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.349 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.350 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:18.350 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.350 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.350 [2024-10-30 09:46:56.718194] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:18.350 [2024-10-30 09:46:56.718217] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.350 [2024-10-30 09:46:56.718274] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.350 [2024-10-30 09:46:56.718493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.350 [2024-10-30 09:46:56.718509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:18.350 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.350 09:46:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 77744 00:12:18.350 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 77744 ']' 00:12:18.350 09:46:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 77744 00:12:18.350 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:12:18.350 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:18.350 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77744 00:12:18.350 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:18.350 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:18.350 killing process with pid 77744 00:12:18.350 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77744' 00:12:18.350 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 77744 00:12:18.350 [2024-10-30 09:46:56.748161] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:18.350 09:46:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 77744 00:12:18.350 [2024-10-30 09:46:56.894375] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:18.916 00:12:18.916 real 0m7.287s 00:12:18.916 user 0m11.687s 00:12:18.916 sys 0m1.210s 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.916 ************************************ 00:12:18.916 END TEST raid5f_state_function_test 00:12:18.916 ************************************ 00:12:18.916 09:46:57 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:12:18.916 09:46:57 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:18.916 09:46:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:18.916 09:46:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:18.916 ************************************ 00:12:18.916 START TEST raid5f_state_function_test_sb 00:12:18.916 ************************************ 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 true 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:18.916 09:46:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:18.916 Process raid pid: 78333 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=78333 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78333' 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 78333 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 78333 ']' 00:12:18.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:18.916 09:46:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.174 [2024-10-30 09:46:57.578629] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:12:19.174 [2024-10-30 09:46:57.578717] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.174 [2024-10-30 09:46:57.727855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.433 [2024-10-30 09:46:57.808196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.433 [2024-10-30 09:46:57.915576] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.433 [2024-10-30 09:46:57.915614] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.999 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:19.999 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:12:19.999 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:19.999 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.999 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.999 [2024-10-30 09:46:58.396539] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:19.999 [2024-10-30 09:46:58.396579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:20.000 [2024-10-30 09:46:58.396587] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:20.000 [2024-10-30 09:46:58.396594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:20.000 [2024-10-30 09:46:58.396599] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:12:20.000 [2024-10-30 09:46:58.396606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:20.000 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.000 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:20.000 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.000 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.000 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:20.000 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.000 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:20.000 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.000 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.000 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.000 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.000 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.000 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.000 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.000 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.000 09:46:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.000 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.000 "name": "Existed_Raid", 00:12:20.000 "uuid": "f3f8eacc-c144-47bf-b466-439bc0f5c30a", 00:12:20.000 "strip_size_kb": 64, 00:12:20.000 "state": "configuring", 00:12:20.000 "raid_level": "raid5f", 00:12:20.000 "superblock": true, 00:12:20.000 "num_base_bdevs": 3, 00:12:20.000 "num_base_bdevs_discovered": 0, 00:12:20.000 "num_base_bdevs_operational": 3, 00:12:20.000 "base_bdevs_list": [ 00:12:20.000 { 00:12:20.000 "name": "BaseBdev1", 00:12:20.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.000 "is_configured": false, 00:12:20.000 "data_offset": 0, 00:12:20.000 "data_size": 0 00:12:20.000 }, 00:12:20.000 { 00:12:20.000 "name": "BaseBdev2", 00:12:20.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.000 "is_configured": false, 00:12:20.000 "data_offset": 0, 00:12:20.000 "data_size": 0 00:12:20.000 }, 00:12:20.000 { 00:12:20.000 "name": "BaseBdev3", 00:12:20.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.000 "is_configured": false, 00:12:20.000 "data_offset": 0, 00:12:20.000 "data_size": 0 00:12:20.000 } 00:12:20.000 ] 00:12:20.000 }' 00:12:20.000 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.000 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.258 [2024-10-30 09:46:58.708557] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:20.258 
[2024-10-30 09:46:58.708586] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.258 [2024-10-30 09:46:58.716567] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:20.258 [2024-10-30 09:46:58.716601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:20.258 [2024-10-30 09:46:58.716607] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:20.258 [2024-10-30 09:46:58.716614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:20.258 [2024-10-30 09:46:58.716619] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:20.258 [2024-10-30 09:46:58.716625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.258 [2024-10-30 09:46:58.744094] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:20.258 BaseBdev1 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.258 [ 00:12:20.258 { 00:12:20.258 "name": "BaseBdev1", 00:12:20.258 "aliases": [ 00:12:20.258 "899a1b6c-42f0-481c-adcd-4e65589685d8" 00:12:20.258 ], 00:12:20.258 "product_name": "Malloc disk", 00:12:20.258 "block_size": 512, 00:12:20.258 
"num_blocks": 65536, 00:12:20.258 "uuid": "899a1b6c-42f0-481c-adcd-4e65589685d8", 00:12:20.258 "assigned_rate_limits": { 00:12:20.258 "rw_ios_per_sec": 0, 00:12:20.258 "rw_mbytes_per_sec": 0, 00:12:20.258 "r_mbytes_per_sec": 0, 00:12:20.258 "w_mbytes_per_sec": 0 00:12:20.258 }, 00:12:20.258 "claimed": true, 00:12:20.258 "claim_type": "exclusive_write", 00:12:20.258 "zoned": false, 00:12:20.258 "supported_io_types": { 00:12:20.258 "read": true, 00:12:20.258 "write": true, 00:12:20.258 "unmap": true, 00:12:20.258 "flush": true, 00:12:20.258 "reset": true, 00:12:20.258 "nvme_admin": false, 00:12:20.258 "nvme_io": false, 00:12:20.258 "nvme_io_md": false, 00:12:20.258 "write_zeroes": true, 00:12:20.258 "zcopy": true, 00:12:20.258 "get_zone_info": false, 00:12:20.258 "zone_management": false, 00:12:20.258 "zone_append": false, 00:12:20.258 "compare": false, 00:12:20.258 "compare_and_write": false, 00:12:20.258 "abort": true, 00:12:20.258 "seek_hole": false, 00:12:20.258 "seek_data": false, 00:12:20.258 "copy": true, 00:12:20.258 "nvme_iov_md": false 00:12:20.258 }, 00:12:20.258 "memory_domains": [ 00:12:20.258 { 00:12:20.258 "dma_device_id": "system", 00:12:20.258 "dma_device_type": 1 00:12:20.258 }, 00:12:20.258 { 00:12:20.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.258 "dma_device_type": 2 00:12:20.258 } 00:12:20.258 ], 00:12:20.258 "driver_specific": {} 00:12:20.258 } 00:12:20.258 ] 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.258 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.259 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.259 "name": "Existed_Raid", 00:12:20.259 "uuid": "ca423527-2205-412e-a3fa-66ce20a308b5", 00:12:20.259 "strip_size_kb": 64, 00:12:20.259 "state": "configuring", 00:12:20.259 "raid_level": "raid5f", 00:12:20.259 "superblock": true, 00:12:20.259 "num_base_bdevs": 3, 00:12:20.259 "num_base_bdevs_discovered": 1, 00:12:20.259 "num_base_bdevs_operational": 3, 00:12:20.259 "base_bdevs_list": [ 00:12:20.259 { 00:12:20.259 
"name": "BaseBdev1", 00:12:20.259 "uuid": "899a1b6c-42f0-481c-adcd-4e65589685d8", 00:12:20.259 "is_configured": true, 00:12:20.259 "data_offset": 2048, 00:12:20.259 "data_size": 63488 00:12:20.259 }, 00:12:20.259 { 00:12:20.259 "name": "BaseBdev2", 00:12:20.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.259 "is_configured": false, 00:12:20.259 "data_offset": 0, 00:12:20.259 "data_size": 0 00:12:20.259 }, 00:12:20.259 { 00:12:20.259 "name": "BaseBdev3", 00:12:20.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.259 "is_configured": false, 00:12:20.259 "data_offset": 0, 00:12:20.259 "data_size": 0 00:12:20.259 } 00:12:20.259 ] 00:12:20.259 }' 00:12:20.259 09:46:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.259 09:46:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.517 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.518 [2024-10-30 09:46:59.072183] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:20.518 [2024-10-30 09:46:59.072319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:20.518 [2024-10-30 09:46:59.080233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:20.518 [2024-10-30 09:46:59.081793] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:20.518 [2024-10-30 09:46:59.081898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:20.518 [2024-10-30 09:46:59.081948] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:20.518 [2024-10-30 09:46:59.081969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.518 "name": "Existed_Raid", 00:12:20.518 "uuid": "73e6550a-64bf-4bb5-87dd-2502a0a011e5", 00:12:20.518 "strip_size_kb": 64, 00:12:20.518 "state": "configuring", 00:12:20.518 "raid_level": "raid5f", 00:12:20.518 "superblock": true, 00:12:20.518 "num_base_bdevs": 3, 00:12:20.518 "num_base_bdevs_discovered": 1, 00:12:20.518 "num_base_bdevs_operational": 3, 00:12:20.518 "base_bdevs_list": [ 00:12:20.518 { 00:12:20.518 "name": "BaseBdev1", 00:12:20.518 "uuid": "899a1b6c-42f0-481c-adcd-4e65589685d8", 00:12:20.518 "is_configured": true, 00:12:20.518 "data_offset": 2048, 00:12:20.518 "data_size": 63488 00:12:20.518 }, 00:12:20.518 { 00:12:20.518 "name": "BaseBdev2", 00:12:20.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.518 "is_configured": false, 00:12:20.518 "data_offset": 0, 00:12:20.518 "data_size": 0 00:12:20.518 }, 00:12:20.518 { 00:12:20.518 "name": "BaseBdev3", 00:12:20.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.518 "is_configured": false, 00:12:20.518 "data_offset": 0, 00:12:20.518 "data_size": 
0 00:12:20.518 } 00:12:20.518 ] 00:12:20.518 }' 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.518 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.776 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:20.776 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.776 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.034 [2024-10-30 09:46:59.410259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:21.034 BaseBdev2 00:12:21.034 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.034 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:21.034 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:21.034 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:21.034 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:21.034 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:21.034 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:21.034 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:21.034 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.034 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.034 09:46:59 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.034 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:21.034 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.034 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.034 [ 00:12:21.034 { 00:12:21.034 "name": "BaseBdev2", 00:12:21.034 "aliases": [ 00:12:21.034 "94215b2c-8068-4c5b-bcae-7b819ba06246" 00:12:21.034 ], 00:12:21.034 "product_name": "Malloc disk", 00:12:21.034 "block_size": 512, 00:12:21.034 "num_blocks": 65536, 00:12:21.034 "uuid": "94215b2c-8068-4c5b-bcae-7b819ba06246", 00:12:21.034 "assigned_rate_limits": { 00:12:21.034 "rw_ios_per_sec": 0, 00:12:21.034 "rw_mbytes_per_sec": 0, 00:12:21.034 "r_mbytes_per_sec": 0, 00:12:21.034 "w_mbytes_per_sec": 0 00:12:21.034 }, 00:12:21.034 "claimed": true, 00:12:21.034 "claim_type": "exclusive_write", 00:12:21.034 "zoned": false, 00:12:21.034 "supported_io_types": { 00:12:21.034 "read": true, 00:12:21.034 "write": true, 00:12:21.034 "unmap": true, 00:12:21.034 "flush": true, 00:12:21.034 "reset": true, 00:12:21.034 "nvme_admin": false, 00:12:21.034 "nvme_io": false, 00:12:21.034 "nvme_io_md": false, 00:12:21.034 "write_zeroes": true, 00:12:21.034 "zcopy": true, 00:12:21.034 "get_zone_info": false, 00:12:21.034 "zone_management": false, 00:12:21.034 "zone_append": false, 00:12:21.034 "compare": false, 00:12:21.034 "compare_and_write": false, 00:12:21.034 "abort": true, 00:12:21.034 "seek_hole": false, 00:12:21.034 "seek_data": false, 00:12:21.034 "copy": true, 00:12:21.034 "nvme_iov_md": false 00:12:21.034 }, 00:12:21.034 "memory_domains": [ 00:12:21.034 { 00:12:21.034 "dma_device_id": "system", 00:12:21.034 "dma_device_type": 1 00:12:21.034 }, 00:12:21.034 { 00:12:21.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.034 "dma_device_type": 2 00:12:21.034 } 
00:12:21.034 ], 00:12:21.034 "driver_specific": {} 00:12:21.034 } 00:12:21.034 ] 00:12:21.034 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.034 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:21.034 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:21.034 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:21.035 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:21.035 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.035 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.035 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:21.035 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.035 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:21.035 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.035 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.035 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.035 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.035 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.035 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:12:21.035 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.035 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.035 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.035 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.035 "name": "Existed_Raid", 00:12:21.035 "uuid": "73e6550a-64bf-4bb5-87dd-2502a0a011e5", 00:12:21.035 "strip_size_kb": 64, 00:12:21.035 "state": "configuring", 00:12:21.035 "raid_level": "raid5f", 00:12:21.035 "superblock": true, 00:12:21.035 "num_base_bdevs": 3, 00:12:21.035 "num_base_bdevs_discovered": 2, 00:12:21.035 "num_base_bdevs_operational": 3, 00:12:21.035 "base_bdevs_list": [ 00:12:21.035 { 00:12:21.035 "name": "BaseBdev1", 00:12:21.035 "uuid": "899a1b6c-42f0-481c-adcd-4e65589685d8", 00:12:21.035 "is_configured": true, 00:12:21.035 "data_offset": 2048, 00:12:21.035 "data_size": 63488 00:12:21.035 }, 00:12:21.035 { 00:12:21.035 "name": "BaseBdev2", 00:12:21.035 "uuid": "94215b2c-8068-4c5b-bcae-7b819ba06246", 00:12:21.035 "is_configured": true, 00:12:21.035 "data_offset": 2048, 00:12:21.035 "data_size": 63488 00:12:21.035 }, 00:12:21.035 { 00:12:21.035 "name": "BaseBdev3", 00:12:21.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.035 "is_configured": false, 00:12:21.035 "data_offset": 0, 00:12:21.035 "data_size": 0 00:12:21.035 } 00:12:21.035 ] 00:12:21.035 }' 00:12:21.035 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.035 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.321 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:21.321 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:12:21.321 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.321 [2024-10-30 09:46:59.783847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:21.321 [2024-10-30 09:46:59.784052] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:21.321 [2024-10-30 09:46:59.784091] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:21.321 [2024-10-30 09:46:59.784317] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:21.321 BaseBdev3 00:12:21.321 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.321 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:21.321 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:21.321 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:21.321 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:21.321 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:21.321 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:21.321 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:21.321 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.321 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.321 [2024-10-30 09:46:59.787259] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:21.321 [2024-10-30 09:46:59.787271] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:21.321 [2024-10-30 09:46:59.787383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.321 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.321 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:21.321 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.321 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.321 [ 00:12:21.321 { 00:12:21.321 "name": "BaseBdev3", 00:12:21.321 "aliases": [ 00:12:21.321 "50f97276-8fac-4dea-a197-1e31fc558f00" 00:12:21.321 ], 00:12:21.321 "product_name": "Malloc disk", 00:12:21.321 "block_size": 512, 00:12:21.321 "num_blocks": 65536, 00:12:21.321 "uuid": "50f97276-8fac-4dea-a197-1e31fc558f00", 00:12:21.321 "assigned_rate_limits": { 00:12:21.321 "rw_ios_per_sec": 0, 00:12:21.321 "rw_mbytes_per_sec": 0, 00:12:21.321 "r_mbytes_per_sec": 0, 00:12:21.321 "w_mbytes_per_sec": 0 00:12:21.321 }, 00:12:21.321 "claimed": true, 00:12:21.321 "claim_type": "exclusive_write", 00:12:21.321 "zoned": false, 00:12:21.321 "supported_io_types": { 00:12:21.321 "read": true, 00:12:21.321 "write": true, 00:12:21.321 "unmap": true, 00:12:21.321 "flush": true, 00:12:21.321 "reset": true, 00:12:21.321 "nvme_admin": false, 00:12:21.321 "nvme_io": false, 00:12:21.321 "nvme_io_md": false, 00:12:21.321 "write_zeroes": true, 00:12:21.321 "zcopy": true, 00:12:21.321 "get_zone_info": false, 00:12:21.321 "zone_management": false, 00:12:21.321 "zone_append": false, 00:12:21.321 "compare": false, 00:12:21.321 "compare_and_write": false, 00:12:21.321 "abort": true, 00:12:21.321 "seek_hole": false, 00:12:21.321 "seek_data": false, 00:12:21.321 "copy": true, 00:12:21.321 
"nvme_iov_md": false 00:12:21.321 }, 00:12:21.321 "memory_domains": [ 00:12:21.321 { 00:12:21.321 "dma_device_id": "system", 00:12:21.321 "dma_device_type": 1 00:12:21.321 }, 00:12:21.321 { 00:12:21.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.321 "dma_device_type": 2 00:12:21.321 } 00:12:21.321 ], 00:12:21.321 "driver_specific": {} 00:12:21.321 } 00:12:21.321 ] 00:12:21.321 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.322 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:21.322 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:21.322 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:21.322 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:21.322 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.322 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.322 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:21.322 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.322 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:21.322 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.322 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.322 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.322 09:46:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.322 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.322 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.322 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.322 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.322 09:46:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.322 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.322 "name": "Existed_Raid", 00:12:21.322 "uuid": "73e6550a-64bf-4bb5-87dd-2502a0a011e5", 00:12:21.322 "strip_size_kb": 64, 00:12:21.322 "state": "online", 00:12:21.322 "raid_level": "raid5f", 00:12:21.322 "superblock": true, 00:12:21.322 "num_base_bdevs": 3, 00:12:21.322 "num_base_bdevs_discovered": 3, 00:12:21.322 "num_base_bdevs_operational": 3, 00:12:21.322 "base_bdevs_list": [ 00:12:21.322 { 00:12:21.322 "name": "BaseBdev1", 00:12:21.322 "uuid": "899a1b6c-42f0-481c-adcd-4e65589685d8", 00:12:21.322 "is_configured": true, 00:12:21.322 "data_offset": 2048, 00:12:21.322 "data_size": 63488 00:12:21.322 }, 00:12:21.322 { 00:12:21.322 "name": "BaseBdev2", 00:12:21.322 "uuid": "94215b2c-8068-4c5b-bcae-7b819ba06246", 00:12:21.322 "is_configured": true, 00:12:21.322 "data_offset": 2048, 00:12:21.322 "data_size": 63488 00:12:21.322 }, 00:12:21.322 { 00:12:21.322 "name": "BaseBdev3", 00:12:21.322 "uuid": "50f97276-8fac-4dea-a197-1e31fc558f00", 00:12:21.322 "is_configured": true, 00:12:21.322 "data_offset": 2048, 00:12:21.322 "data_size": 63488 00:12:21.322 } 00:12:21.322 ] 00:12:21.322 }' 00:12:21.322 09:46:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.322 09:46:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.579 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:21.579 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:21.579 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:21.579 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:21.579 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:21.579 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:21.579 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:21.579 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:21.579 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.579 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.579 [2024-10-30 09:47:00.118752] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:21.579 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.579 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:21.579 "name": "Existed_Raid", 00:12:21.579 "aliases": [ 00:12:21.579 "73e6550a-64bf-4bb5-87dd-2502a0a011e5" 00:12:21.579 ], 00:12:21.579 "product_name": "Raid Volume", 00:12:21.579 "block_size": 512, 00:12:21.579 "num_blocks": 126976, 00:12:21.579 "uuid": "73e6550a-64bf-4bb5-87dd-2502a0a011e5", 00:12:21.579 "assigned_rate_limits": { 00:12:21.579 "rw_ios_per_sec": 0, 00:12:21.579 
"rw_mbytes_per_sec": 0, 00:12:21.579 "r_mbytes_per_sec": 0, 00:12:21.579 "w_mbytes_per_sec": 0 00:12:21.579 }, 00:12:21.579 "claimed": false, 00:12:21.579 "zoned": false, 00:12:21.579 "supported_io_types": { 00:12:21.579 "read": true, 00:12:21.579 "write": true, 00:12:21.579 "unmap": false, 00:12:21.579 "flush": false, 00:12:21.579 "reset": true, 00:12:21.579 "nvme_admin": false, 00:12:21.579 "nvme_io": false, 00:12:21.579 "nvme_io_md": false, 00:12:21.579 "write_zeroes": true, 00:12:21.579 "zcopy": false, 00:12:21.579 "get_zone_info": false, 00:12:21.579 "zone_management": false, 00:12:21.579 "zone_append": false, 00:12:21.579 "compare": false, 00:12:21.579 "compare_and_write": false, 00:12:21.579 "abort": false, 00:12:21.579 "seek_hole": false, 00:12:21.579 "seek_data": false, 00:12:21.579 "copy": false, 00:12:21.579 "nvme_iov_md": false 00:12:21.579 }, 00:12:21.579 "driver_specific": { 00:12:21.579 "raid": { 00:12:21.579 "uuid": "73e6550a-64bf-4bb5-87dd-2502a0a011e5", 00:12:21.579 "strip_size_kb": 64, 00:12:21.579 "state": "online", 00:12:21.579 "raid_level": "raid5f", 00:12:21.579 "superblock": true, 00:12:21.579 "num_base_bdevs": 3, 00:12:21.579 "num_base_bdevs_discovered": 3, 00:12:21.579 "num_base_bdevs_operational": 3, 00:12:21.579 "base_bdevs_list": [ 00:12:21.579 { 00:12:21.579 "name": "BaseBdev1", 00:12:21.579 "uuid": "899a1b6c-42f0-481c-adcd-4e65589685d8", 00:12:21.579 "is_configured": true, 00:12:21.579 "data_offset": 2048, 00:12:21.579 "data_size": 63488 00:12:21.579 }, 00:12:21.579 { 00:12:21.579 "name": "BaseBdev2", 00:12:21.579 "uuid": "94215b2c-8068-4c5b-bcae-7b819ba06246", 00:12:21.579 "is_configured": true, 00:12:21.579 "data_offset": 2048, 00:12:21.579 "data_size": 63488 00:12:21.579 }, 00:12:21.579 { 00:12:21.579 "name": "BaseBdev3", 00:12:21.579 "uuid": "50f97276-8fac-4dea-a197-1e31fc558f00", 00:12:21.579 "is_configured": true, 00:12:21.579 "data_offset": 2048, 00:12:21.579 "data_size": 63488 00:12:21.579 } 00:12:21.579 ] 00:12:21.579 } 
00:12:21.579 } 00:12:21.579 }' 00:12:21.579 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:21.579 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:21.579 BaseBdev2 00:12:21.579 BaseBdev3' 00:12:21.579 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.836 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:21.836 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.836 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:21.836 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.836 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.836 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.836 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.836 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.836 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.836 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.836 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.837 [2024-10-30 09:47:00.322655] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.837 "name": "Existed_Raid", 00:12:21.837 "uuid": "73e6550a-64bf-4bb5-87dd-2502a0a011e5", 00:12:21.837 "strip_size_kb": 64, 00:12:21.837 "state": "online", 00:12:21.837 "raid_level": "raid5f", 00:12:21.837 "superblock": true, 00:12:21.837 "num_base_bdevs": 3, 00:12:21.837 "num_base_bdevs_discovered": 2, 00:12:21.837 "num_base_bdevs_operational": 2, 00:12:21.837 "base_bdevs_list": [ 00:12:21.837 { 00:12:21.837 "name": null, 00:12:21.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.837 "is_configured": false, 00:12:21.837 "data_offset": 0, 00:12:21.837 "data_size": 63488 00:12:21.837 }, 00:12:21.837 { 00:12:21.837 "name": "BaseBdev2", 00:12:21.837 "uuid": "94215b2c-8068-4c5b-bcae-7b819ba06246", 00:12:21.837 "is_configured": true, 00:12:21.837 "data_offset": 2048, 00:12:21.837 "data_size": 63488 00:12:21.837 }, 00:12:21.837 { 00:12:21.837 "name": "BaseBdev3", 00:12:21.837 "uuid": "50f97276-8fac-4dea-a197-1e31fc558f00", 00:12:21.837 "is_configured": true, 00:12:21.837 "data_offset": 2048, 00:12:21.837 "data_size": 63488 00:12:21.837 } 00:12:21.837 ] 00:12:21.837 }' 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.837 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.095 09:47:00 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:22.095 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:22.095 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.095 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:22.095 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.095 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.095 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.096 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.354 [2024-10-30 09:47:00.721400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:22.354 [2024-10-30 09:47:00.721509] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:22.354 [2024-10-30 09:47:00.769269] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.354 [2024-10-30 09:47:00.813322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:22.354 [2024-10-30 09:47:00.813430] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.354 BaseBdev2 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # 
[[ -z '' ]] 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.354 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:22.355 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.355 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.355 [ 00:12:22.355 { 00:12:22.355 "name": "BaseBdev2", 00:12:22.355 "aliases": [ 00:12:22.355 "20254b6f-2feb-47eb-a18c-6f11d6d1042d" 00:12:22.355 ], 00:12:22.355 "product_name": "Malloc disk", 00:12:22.355 "block_size": 512, 00:12:22.355 "num_blocks": 65536, 00:12:22.355 "uuid": "20254b6f-2feb-47eb-a18c-6f11d6d1042d", 00:12:22.355 "assigned_rate_limits": { 00:12:22.355 "rw_ios_per_sec": 0, 00:12:22.355 "rw_mbytes_per_sec": 0, 00:12:22.355 "r_mbytes_per_sec": 0, 00:12:22.355 "w_mbytes_per_sec": 0 00:12:22.355 }, 00:12:22.355 "claimed": false, 00:12:22.355 "zoned": false, 00:12:22.355 "supported_io_types": { 00:12:22.355 "read": true, 00:12:22.355 "write": true, 00:12:22.355 "unmap": true, 00:12:22.355 "flush": true, 00:12:22.355 "reset": true, 00:12:22.355 "nvme_admin": false, 00:12:22.355 "nvme_io": false, 00:12:22.355 "nvme_io_md": false, 00:12:22.355 "write_zeroes": true, 00:12:22.355 "zcopy": true, 00:12:22.355 "get_zone_info": false, 00:12:22.355 "zone_management": false, 00:12:22.355 "zone_append": false, 
00:12:22.355 "compare": false, 00:12:22.355 "compare_and_write": false, 00:12:22.355 "abort": true, 00:12:22.355 "seek_hole": false, 00:12:22.355 "seek_data": false, 00:12:22.355 "copy": true, 00:12:22.355 "nvme_iov_md": false 00:12:22.355 }, 00:12:22.355 "memory_domains": [ 00:12:22.355 { 00:12:22.355 "dma_device_id": "system", 00:12:22.355 "dma_device_type": 1 00:12:22.355 }, 00:12:22.355 { 00:12:22.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.355 "dma_device_type": 2 00:12:22.355 } 00:12:22.355 ], 00:12:22.355 "driver_specific": {} 00:12:22.355 } 00:12:22.355 ] 00:12:22.355 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.355 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:22.355 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:22.355 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:22.355 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:22.355 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.355 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.614 BaseBdev3 00:12:22.614 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.614 09:47:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:22.614 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:22.614 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:22.614 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:22.614 
09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:22.614 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:22.614 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:22.614 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.614 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.614 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.614 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:22.614 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.614 09:47:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.614 [ 00:12:22.614 { 00:12:22.614 "name": "BaseBdev3", 00:12:22.614 "aliases": [ 00:12:22.614 "c9461d6b-0bcb-4659-86cb-97cc59370124" 00:12:22.614 ], 00:12:22.614 "product_name": "Malloc disk", 00:12:22.614 "block_size": 512, 00:12:22.614 "num_blocks": 65536, 00:12:22.614 "uuid": "c9461d6b-0bcb-4659-86cb-97cc59370124", 00:12:22.614 "assigned_rate_limits": { 00:12:22.614 "rw_ios_per_sec": 0, 00:12:22.614 "rw_mbytes_per_sec": 0, 00:12:22.614 "r_mbytes_per_sec": 0, 00:12:22.614 "w_mbytes_per_sec": 0 00:12:22.614 }, 00:12:22.614 "claimed": false, 00:12:22.614 "zoned": false, 00:12:22.614 "supported_io_types": { 00:12:22.614 "read": true, 00:12:22.614 "write": true, 00:12:22.614 "unmap": true, 00:12:22.614 "flush": true, 00:12:22.614 "reset": true, 00:12:22.614 "nvme_admin": false, 00:12:22.614 "nvme_io": false, 00:12:22.614 "nvme_io_md": false, 00:12:22.614 "write_zeroes": true, 00:12:22.614 "zcopy": true, 00:12:22.614 "get_zone_info": 
false, 00:12:22.614 "zone_management": false, 00:12:22.614 "zone_append": false, 00:12:22.614 "compare": false, 00:12:22.614 "compare_and_write": false, 00:12:22.614 "abort": true, 00:12:22.614 "seek_hole": false, 00:12:22.614 "seek_data": false, 00:12:22.614 "copy": true, 00:12:22.614 "nvme_iov_md": false 00:12:22.614 }, 00:12:22.614 "memory_domains": [ 00:12:22.614 { 00:12:22.614 "dma_device_id": "system", 00:12:22.614 "dma_device_type": 1 00:12:22.614 }, 00:12:22.614 { 00:12:22.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.614 "dma_device_type": 2 00:12:22.614 } 00:12:22.614 ], 00:12:22.614 "driver_specific": {} 00:12:22.614 } 00:12:22.614 ] 00:12:22.614 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.614 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:22.614 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:22.614 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:22.614 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:22.614 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.614 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.614 [2024-10-30 09:47:01.008485] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:22.614 [2024-10-30 09:47:01.008602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:22.614 [2024-10-30 09:47:01.008665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:22.614 [2024-10-30 09:47:01.010270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:12:22.614 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.614 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:22.614 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.614 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.614 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:22.614 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.614 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:22.614 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.614 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.614 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.614 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.614 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.614 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.614 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.615 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.615 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.615 09:47:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.615 "name": "Existed_Raid", 00:12:22.615 "uuid": "8976aa1a-68c8-4f2d-b664-acad592b3284", 00:12:22.615 "strip_size_kb": 64, 00:12:22.615 "state": "configuring", 00:12:22.615 "raid_level": "raid5f", 00:12:22.615 "superblock": true, 00:12:22.615 "num_base_bdevs": 3, 00:12:22.615 "num_base_bdevs_discovered": 2, 00:12:22.615 "num_base_bdevs_operational": 3, 00:12:22.615 "base_bdevs_list": [ 00:12:22.615 { 00:12:22.615 "name": "BaseBdev1", 00:12:22.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.615 "is_configured": false, 00:12:22.615 "data_offset": 0, 00:12:22.615 "data_size": 0 00:12:22.615 }, 00:12:22.615 { 00:12:22.615 "name": "BaseBdev2", 00:12:22.615 "uuid": "20254b6f-2feb-47eb-a18c-6f11d6d1042d", 00:12:22.615 "is_configured": true, 00:12:22.615 "data_offset": 2048, 00:12:22.615 "data_size": 63488 00:12:22.615 }, 00:12:22.615 { 00:12:22.615 "name": "BaseBdev3", 00:12:22.615 "uuid": "c9461d6b-0bcb-4659-86cb-97cc59370124", 00:12:22.615 "is_configured": true, 00:12:22.615 "data_offset": 2048, 00:12:22.615 "data_size": 63488 00:12:22.615 } 00:12:22.615 ] 00:12:22.615 }' 00:12:22.615 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.615 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.873 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:22.873 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.873 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.873 [2024-10-30 09:47:01.352538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:22.873 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.873 
09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:22.873 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.873 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.873 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:22.873 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.873 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:22.873 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.873 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.873 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.873 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.873 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.873 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.873 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.873 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.873 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.873 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.873 "name": "Existed_Raid", 00:12:22.873 "uuid": 
"8976aa1a-68c8-4f2d-b664-acad592b3284", 00:12:22.873 "strip_size_kb": 64, 00:12:22.873 "state": "configuring", 00:12:22.873 "raid_level": "raid5f", 00:12:22.873 "superblock": true, 00:12:22.873 "num_base_bdevs": 3, 00:12:22.873 "num_base_bdevs_discovered": 1, 00:12:22.873 "num_base_bdevs_operational": 3, 00:12:22.873 "base_bdevs_list": [ 00:12:22.873 { 00:12:22.873 "name": "BaseBdev1", 00:12:22.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.873 "is_configured": false, 00:12:22.873 "data_offset": 0, 00:12:22.873 "data_size": 0 00:12:22.873 }, 00:12:22.873 { 00:12:22.873 "name": null, 00:12:22.873 "uuid": "20254b6f-2feb-47eb-a18c-6f11d6d1042d", 00:12:22.873 "is_configured": false, 00:12:22.873 "data_offset": 0, 00:12:22.873 "data_size": 63488 00:12:22.873 }, 00:12:22.873 { 00:12:22.873 "name": "BaseBdev3", 00:12:22.873 "uuid": "c9461d6b-0bcb-4659-86cb-97cc59370124", 00:12:22.873 "is_configured": true, 00:12:22.873 "data_offset": 2048, 00:12:22.873 "data_size": 63488 00:12:22.873 } 00:12:22.873 ] 00:12:22.873 }' 00:12:22.873 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.873 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:23.131 09:47:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.131 [2024-10-30 09:47:01.714497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:23.131 BaseBdev1 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.131 [ 00:12:23.131 { 00:12:23.131 "name": "BaseBdev1", 00:12:23.131 "aliases": [ 00:12:23.131 "5ce2b5aa-6d97-4d79-8922-1583a0ccbcd1" 00:12:23.131 ], 00:12:23.131 "product_name": "Malloc disk", 00:12:23.131 "block_size": 512, 00:12:23.131 "num_blocks": 65536, 00:12:23.131 "uuid": "5ce2b5aa-6d97-4d79-8922-1583a0ccbcd1", 00:12:23.131 "assigned_rate_limits": { 00:12:23.131 "rw_ios_per_sec": 0, 00:12:23.131 "rw_mbytes_per_sec": 0, 00:12:23.131 "r_mbytes_per_sec": 0, 00:12:23.131 "w_mbytes_per_sec": 0 00:12:23.131 }, 00:12:23.131 "claimed": true, 00:12:23.131 "claim_type": "exclusive_write", 00:12:23.131 "zoned": false, 00:12:23.131 "supported_io_types": { 00:12:23.131 "read": true, 00:12:23.131 "write": true, 00:12:23.131 "unmap": true, 00:12:23.131 "flush": true, 00:12:23.131 "reset": true, 00:12:23.131 "nvme_admin": false, 00:12:23.131 "nvme_io": false, 00:12:23.131 "nvme_io_md": false, 00:12:23.131 "write_zeroes": true, 00:12:23.131 "zcopy": true, 00:12:23.131 "get_zone_info": false, 00:12:23.131 "zone_management": false, 00:12:23.131 "zone_append": false, 00:12:23.131 "compare": false, 00:12:23.131 "compare_and_write": false, 00:12:23.131 "abort": true, 00:12:23.131 "seek_hole": false, 00:12:23.131 "seek_data": false, 00:12:23.131 "copy": true, 00:12:23.131 "nvme_iov_md": false 00:12:23.131 }, 00:12:23.131 "memory_domains": [ 00:12:23.131 { 00:12:23.131 "dma_device_id": "system", 00:12:23.131 "dma_device_type": 1 00:12:23.131 }, 00:12:23.131 { 00:12:23.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.131 "dma_device_type": 2 00:12:23.131 } 00:12:23.131 ], 00:12:23.131 "driver_specific": {} 00:12:23.131 } 00:12:23.131 ] 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # 
return 0 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.131 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.389 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.389 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.389 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.389 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.389 "name": "Existed_Raid", 00:12:23.389 "uuid": 
"8976aa1a-68c8-4f2d-b664-acad592b3284", 00:12:23.389 "strip_size_kb": 64, 00:12:23.389 "state": "configuring", 00:12:23.389 "raid_level": "raid5f", 00:12:23.389 "superblock": true, 00:12:23.389 "num_base_bdevs": 3, 00:12:23.389 "num_base_bdevs_discovered": 2, 00:12:23.389 "num_base_bdevs_operational": 3, 00:12:23.389 "base_bdevs_list": [ 00:12:23.389 { 00:12:23.389 "name": "BaseBdev1", 00:12:23.389 "uuid": "5ce2b5aa-6d97-4d79-8922-1583a0ccbcd1", 00:12:23.389 "is_configured": true, 00:12:23.389 "data_offset": 2048, 00:12:23.389 "data_size": 63488 00:12:23.389 }, 00:12:23.389 { 00:12:23.389 "name": null, 00:12:23.389 "uuid": "20254b6f-2feb-47eb-a18c-6f11d6d1042d", 00:12:23.389 "is_configured": false, 00:12:23.389 "data_offset": 0, 00:12:23.389 "data_size": 63488 00:12:23.389 }, 00:12:23.389 { 00:12:23.389 "name": "BaseBdev3", 00:12:23.389 "uuid": "c9461d6b-0bcb-4659-86cb-97cc59370124", 00:12:23.389 "is_configured": true, 00:12:23.389 "data_offset": 2048, 00:12:23.389 "data_size": 63488 00:12:23.389 } 00:12:23.389 ] 00:12:23.389 }' 00:12:23.389 09:47:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.389 09:47:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:23.647 09:47:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.647 [2024-10-30 09:47:02.110615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.647 "name": "Existed_Raid", 00:12:23.647 "uuid": "8976aa1a-68c8-4f2d-b664-acad592b3284", 00:12:23.647 "strip_size_kb": 64, 00:12:23.647 "state": "configuring", 00:12:23.647 "raid_level": "raid5f", 00:12:23.647 "superblock": true, 00:12:23.647 "num_base_bdevs": 3, 00:12:23.647 "num_base_bdevs_discovered": 1, 00:12:23.647 "num_base_bdevs_operational": 3, 00:12:23.647 "base_bdevs_list": [ 00:12:23.647 { 00:12:23.647 "name": "BaseBdev1", 00:12:23.647 "uuid": "5ce2b5aa-6d97-4d79-8922-1583a0ccbcd1", 00:12:23.647 "is_configured": true, 00:12:23.647 "data_offset": 2048, 00:12:23.647 "data_size": 63488 00:12:23.647 }, 00:12:23.647 { 00:12:23.647 "name": null, 00:12:23.647 "uuid": "20254b6f-2feb-47eb-a18c-6f11d6d1042d", 00:12:23.647 "is_configured": false, 00:12:23.647 "data_offset": 0, 00:12:23.647 "data_size": 63488 00:12:23.647 }, 00:12:23.647 { 00:12:23.647 "name": null, 00:12:23.647 "uuid": "c9461d6b-0bcb-4659-86cb-97cc59370124", 00:12:23.647 "is_configured": false, 00:12:23.647 "data_offset": 0, 00:12:23.647 "data_size": 63488 00:12:23.647 } 00:12:23.647 ] 00:12:23.647 }' 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.647 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.905 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.905 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # 
jq '.[0].base_bdevs_list[2].is_configured' 00:12:23.905 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.905 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.905 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.905 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:23.905 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:23.905 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.905 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.905 [2024-10-30 09:47:02.454721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:23.905 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.905 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:23.905 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.905 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.905 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:23.905 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.905 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.906 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.906 09:47:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.906 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.906 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.906 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.906 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.906 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.906 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.906 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.906 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.906 "name": "Existed_Raid", 00:12:23.906 "uuid": "8976aa1a-68c8-4f2d-b664-acad592b3284", 00:12:23.906 "strip_size_kb": 64, 00:12:23.906 "state": "configuring", 00:12:23.906 "raid_level": "raid5f", 00:12:23.906 "superblock": true, 00:12:23.906 "num_base_bdevs": 3, 00:12:23.906 "num_base_bdevs_discovered": 2, 00:12:23.906 "num_base_bdevs_operational": 3, 00:12:23.906 "base_bdevs_list": [ 00:12:23.906 { 00:12:23.906 "name": "BaseBdev1", 00:12:23.906 "uuid": "5ce2b5aa-6d97-4d79-8922-1583a0ccbcd1", 00:12:23.906 "is_configured": true, 00:12:23.906 "data_offset": 2048, 00:12:23.906 "data_size": 63488 00:12:23.906 }, 00:12:23.906 { 00:12:23.906 "name": null, 00:12:23.906 "uuid": "20254b6f-2feb-47eb-a18c-6f11d6d1042d", 00:12:23.906 "is_configured": false, 00:12:23.906 "data_offset": 0, 00:12:23.906 "data_size": 63488 00:12:23.906 }, 00:12:23.906 { 00:12:23.906 "name": "BaseBdev3", 00:12:23.906 "uuid": "c9461d6b-0bcb-4659-86cb-97cc59370124", 00:12:23.906 
"is_configured": true, 00:12:23.906 "data_offset": 2048, 00:12:23.906 "data_size": 63488 00:12:23.906 } 00:12:23.906 ] 00:12:23.906 }' 00:12:23.906 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.906 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.163 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.163 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.163 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.163 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:24.163 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.421 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:24.421 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:24.421 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.421 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.421 [2024-10-30 09:47:02.802775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:24.421 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.421 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:24.421 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.421 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:12:24.421 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:24.421 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.421 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:24.421 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.421 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.421 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.421 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.421 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.421 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.421 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.421 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.421 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.421 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.421 "name": "Existed_Raid", 00:12:24.421 "uuid": "8976aa1a-68c8-4f2d-b664-acad592b3284", 00:12:24.421 "strip_size_kb": 64, 00:12:24.421 "state": "configuring", 00:12:24.421 "raid_level": "raid5f", 00:12:24.421 "superblock": true, 00:12:24.421 "num_base_bdevs": 3, 00:12:24.421 "num_base_bdevs_discovered": 1, 00:12:24.421 "num_base_bdevs_operational": 3, 00:12:24.421 "base_bdevs_list": [ 00:12:24.421 { 00:12:24.421 "name": null, 00:12:24.421 
"uuid": "5ce2b5aa-6d97-4d79-8922-1583a0ccbcd1", 00:12:24.421 "is_configured": false, 00:12:24.421 "data_offset": 0, 00:12:24.421 "data_size": 63488 00:12:24.421 }, 00:12:24.421 { 00:12:24.421 "name": null, 00:12:24.421 "uuid": "20254b6f-2feb-47eb-a18c-6f11d6d1042d", 00:12:24.421 "is_configured": false, 00:12:24.421 "data_offset": 0, 00:12:24.421 "data_size": 63488 00:12:24.421 }, 00:12:24.421 { 00:12:24.421 "name": "BaseBdev3", 00:12:24.421 "uuid": "c9461d6b-0bcb-4659-86cb-97cc59370124", 00:12:24.421 "is_configured": true, 00:12:24.421 "data_offset": 2048, 00:12:24.421 "data_size": 63488 00:12:24.421 } 00:12:24.421 ] 00:12:24.421 }' 00:12:24.421 09:47:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.421 09:47:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.679 [2024-10-30 09:47:03.209274] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.679 "name": "Existed_Raid", 00:12:24.679 "uuid": "8976aa1a-68c8-4f2d-b664-acad592b3284", 00:12:24.679 "strip_size_kb": 64, 00:12:24.679 "state": "configuring", 00:12:24.679 "raid_level": "raid5f", 00:12:24.679 "superblock": true, 00:12:24.679 "num_base_bdevs": 3, 00:12:24.679 "num_base_bdevs_discovered": 2, 00:12:24.679 "num_base_bdevs_operational": 3, 00:12:24.679 "base_bdevs_list": [ 00:12:24.679 { 00:12:24.679 "name": null, 00:12:24.679 "uuid": "5ce2b5aa-6d97-4d79-8922-1583a0ccbcd1", 00:12:24.679 "is_configured": false, 00:12:24.679 "data_offset": 0, 00:12:24.679 "data_size": 63488 00:12:24.679 }, 00:12:24.679 { 00:12:24.679 "name": "BaseBdev2", 00:12:24.679 "uuid": "20254b6f-2feb-47eb-a18c-6f11d6d1042d", 00:12:24.679 "is_configured": true, 00:12:24.679 "data_offset": 2048, 00:12:24.679 "data_size": 63488 00:12:24.679 }, 00:12:24.679 { 00:12:24.679 "name": "BaseBdev3", 00:12:24.679 "uuid": "c9461d6b-0bcb-4659-86cb-97cc59370124", 00:12:24.679 "is_configured": true, 00:12:24.679 "data_offset": 2048, 00:12:24.679 "data_size": 63488 00:12:24.679 } 00:12:24.679 ] 00:12:24.679 }' 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.679 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.244 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:25.244 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.244 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.244 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.244 09:47:03 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.244 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:25.244 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.244 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:25.244 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.244 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.244 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.244 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5ce2b5aa-6d97-4d79-8922-1583a0ccbcd1 00:12:25.244 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.244 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.244 [2024-10-30 09:47:03.671521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:25.244 [2024-10-30 09:47:03.671693] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:25.244 [2024-10-30 09:47:03.671705] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:25.244 NewBaseBdev 00:12:25.244 [2024-10-30 09:47:03.671902] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:25.244 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.244 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:25.244 09:47:03 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:25.244 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.245 [2024-10-30 09:47:03.674822] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:25.245 [2024-10-30 09:47:03.674838] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:25.245 [2024-10-30 09:47:03.674944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.245 [ 00:12:25.245 { 00:12:25.245 "name": "NewBaseBdev", 00:12:25.245 "aliases": [ 00:12:25.245 "5ce2b5aa-6d97-4d79-8922-1583a0ccbcd1" 00:12:25.245 ], 00:12:25.245 "product_name": "Malloc disk", 00:12:25.245 "block_size": 512, 
00:12:25.245 "num_blocks": 65536, 00:12:25.245 "uuid": "5ce2b5aa-6d97-4d79-8922-1583a0ccbcd1", 00:12:25.245 "assigned_rate_limits": { 00:12:25.245 "rw_ios_per_sec": 0, 00:12:25.245 "rw_mbytes_per_sec": 0, 00:12:25.245 "r_mbytes_per_sec": 0, 00:12:25.245 "w_mbytes_per_sec": 0 00:12:25.245 }, 00:12:25.245 "claimed": true, 00:12:25.245 "claim_type": "exclusive_write", 00:12:25.245 "zoned": false, 00:12:25.245 "supported_io_types": { 00:12:25.245 "read": true, 00:12:25.245 "write": true, 00:12:25.245 "unmap": true, 00:12:25.245 "flush": true, 00:12:25.245 "reset": true, 00:12:25.245 "nvme_admin": false, 00:12:25.245 "nvme_io": false, 00:12:25.245 "nvme_io_md": false, 00:12:25.245 "write_zeroes": true, 00:12:25.245 "zcopy": true, 00:12:25.245 "get_zone_info": false, 00:12:25.245 "zone_management": false, 00:12:25.245 "zone_append": false, 00:12:25.245 "compare": false, 00:12:25.245 "compare_and_write": false, 00:12:25.245 "abort": true, 00:12:25.245 "seek_hole": false, 00:12:25.245 "seek_data": false, 00:12:25.245 "copy": true, 00:12:25.245 "nvme_iov_md": false 00:12:25.245 }, 00:12:25.245 "memory_domains": [ 00:12:25.245 { 00:12:25.245 "dma_device_id": "system", 00:12:25.245 "dma_device_type": 1 00:12:25.245 }, 00:12:25.245 { 00:12:25.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.245 "dma_device_type": 2 00:12:25.245 } 00:12:25.245 ], 00:12:25.245 "driver_specific": {} 00:12:25.245 } 00:12:25.245 ] 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.245 "name": "Existed_Raid", 00:12:25.245 "uuid": "8976aa1a-68c8-4f2d-b664-acad592b3284", 00:12:25.245 "strip_size_kb": 64, 00:12:25.245 "state": "online", 00:12:25.245 "raid_level": "raid5f", 00:12:25.245 "superblock": true, 00:12:25.245 "num_base_bdevs": 3, 00:12:25.245 "num_base_bdevs_discovered": 3, 00:12:25.245 "num_base_bdevs_operational": 3, 00:12:25.245 "base_bdevs_list": [ 00:12:25.245 { 00:12:25.245 "name": 
"NewBaseBdev", 00:12:25.245 "uuid": "5ce2b5aa-6d97-4d79-8922-1583a0ccbcd1", 00:12:25.245 "is_configured": true, 00:12:25.245 "data_offset": 2048, 00:12:25.245 "data_size": 63488 00:12:25.245 }, 00:12:25.245 { 00:12:25.245 "name": "BaseBdev2", 00:12:25.245 "uuid": "20254b6f-2feb-47eb-a18c-6f11d6d1042d", 00:12:25.245 "is_configured": true, 00:12:25.245 "data_offset": 2048, 00:12:25.245 "data_size": 63488 00:12:25.245 }, 00:12:25.245 { 00:12:25.245 "name": "BaseBdev3", 00:12:25.245 "uuid": "c9461d6b-0bcb-4659-86cb-97cc59370124", 00:12:25.245 "is_configured": true, 00:12:25.245 "data_offset": 2048, 00:12:25.245 "data_size": 63488 00:12:25.245 } 00:12:25.245 ] 00:12:25.245 }' 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.245 09:47:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.503 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:25.503 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:25.503 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:25.503 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:25.503 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:25.503 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:25.503 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:25.503 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:25.503 09:47:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.503 09:47:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.503 [2024-10-30 09:47:04.038452] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.503 09:47:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.503 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:25.503 "name": "Existed_Raid", 00:12:25.503 "aliases": [ 00:12:25.503 "8976aa1a-68c8-4f2d-b664-acad592b3284" 00:12:25.503 ], 00:12:25.503 "product_name": "Raid Volume", 00:12:25.503 "block_size": 512, 00:12:25.503 "num_blocks": 126976, 00:12:25.503 "uuid": "8976aa1a-68c8-4f2d-b664-acad592b3284", 00:12:25.503 "assigned_rate_limits": { 00:12:25.503 "rw_ios_per_sec": 0, 00:12:25.503 "rw_mbytes_per_sec": 0, 00:12:25.503 "r_mbytes_per_sec": 0, 00:12:25.503 "w_mbytes_per_sec": 0 00:12:25.503 }, 00:12:25.503 "claimed": false, 00:12:25.503 "zoned": false, 00:12:25.503 "supported_io_types": { 00:12:25.503 "read": true, 00:12:25.503 "write": true, 00:12:25.503 "unmap": false, 00:12:25.503 "flush": false, 00:12:25.503 "reset": true, 00:12:25.503 "nvme_admin": false, 00:12:25.503 "nvme_io": false, 00:12:25.503 "nvme_io_md": false, 00:12:25.503 "write_zeroes": true, 00:12:25.503 "zcopy": false, 00:12:25.503 "get_zone_info": false, 00:12:25.503 "zone_management": false, 00:12:25.503 "zone_append": false, 00:12:25.503 "compare": false, 00:12:25.503 "compare_and_write": false, 00:12:25.503 "abort": false, 00:12:25.503 "seek_hole": false, 00:12:25.503 "seek_data": false, 00:12:25.504 "copy": false, 00:12:25.504 "nvme_iov_md": false 00:12:25.504 }, 00:12:25.504 "driver_specific": { 00:12:25.504 "raid": { 00:12:25.504 "uuid": "8976aa1a-68c8-4f2d-b664-acad592b3284", 00:12:25.504 "strip_size_kb": 64, 00:12:25.504 "state": "online", 00:12:25.504 "raid_level": "raid5f", 00:12:25.504 "superblock": true, 00:12:25.504 "num_base_bdevs": 3, 00:12:25.504 
"num_base_bdevs_discovered": 3, 00:12:25.504 "num_base_bdevs_operational": 3, 00:12:25.504 "base_bdevs_list": [ 00:12:25.504 { 00:12:25.504 "name": "NewBaseBdev", 00:12:25.504 "uuid": "5ce2b5aa-6d97-4d79-8922-1583a0ccbcd1", 00:12:25.504 "is_configured": true, 00:12:25.504 "data_offset": 2048, 00:12:25.504 "data_size": 63488 00:12:25.504 }, 00:12:25.504 { 00:12:25.504 "name": "BaseBdev2", 00:12:25.504 "uuid": "20254b6f-2feb-47eb-a18c-6f11d6d1042d", 00:12:25.504 "is_configured": true, 00:12:25.504 "data_offset": 2048, 00:12:25.504 "data_size": 63488 00:12:25.504 }, 00:12:25.504 { 00:12:25.504 "name": "BaseBdev3", 00:12:25.504 "uuid": "c9461d6b-0bcb-4659-86cb-97cc59370124", 00:12:25.504 "is_configured": true, 00:12:25.504 "data_offset": 2048, 00:12:25.504 "data_size": 63488 00:12:25.504 } 00:12:25.504 ] 00:12:25.504 } 00:12:25.504 } 00:12:25.504 }' 00:12:25.504 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:25.504 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:25.504 BaseBdev2 00:12:25.504 BaseBdev3' 00:12:25.504 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.504 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:25.504 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.761 
09:47:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.761 [2024-10-30 09:47:04.226316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:25.761 [2024-10-30 09:47:04.226337] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:25.761 [2024-10-30 09:47:04.226396] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:25.761 [2024-10-30 09:47:04.226620] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:25.761 [2024-10-30 09:47:04.226631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 78333 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 78333 ']' 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 78333 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@957 -- # uname 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78333 00:12:25.761 killing process with pid 78333 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78333' 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 78333 00:12:25.761 [2024-10-30 09:47:04.255965] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:25.761 09:47:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 78333 00:12:26.020 [2024-10-30 09:47:04.401459] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:26.587 09:47:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:26.587 00:12:26.587 real 0m7.437s 00:12:26.587 user 0m12.074s 00:12:26.587 sys 0m1.213s 00:12:26.587 ************************************ 00:12:26.587 END TEST raid5f_state_function_test_sb 00:12:26.587 ************************************ 00:12:26.587 09:47:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:26.587 09:47:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.587 09:47:05 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:12:26.587 09:47:05 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:26.587 09:47:05 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:12:26.587 09:47:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:26.587 ************************************ 00:12:26.587 START TEST raid5f_superblock_test 00:12:26.587 ************************************ 00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 3 00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:26.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=78925 00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 78925 00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 78925 ']' 00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:26.587 09:47:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.587 [2024-10-30 09:47:05.079646] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:12:26.587 [2024-10-30 09:47:05.079913] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78925 ] 00:12:26.844 [2024-10-30 09:47:05.235539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.844 [2024-10-30 09:47:05.316494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.844 [2024-10-30 09:47:05.422309] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.844 [2024-10-30 09:47:05.422448] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.410 malloc1 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.410 [2024-10-30 09:47:05.952473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:27.410 [2024-10-30 09:47:05.952680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.410 [2024-10-30 09:47:05.952701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:27.410 [2024-10-30 09:47:05.952708] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.410 [2024-10-30 09:47:05.954452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.410 [2024-10-30 09:47:05.954482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:27.410 pt1 00:12:27.410 
09:47:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.410 malloc2 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.410 [2024-10-30 09:47:05.987185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:27.410 [2024-10-30 
09:47:05.987295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.410 [2024-10-30 09:47:05.987315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:27.410 [2024-10-30 09:47:05.987323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.410 [2024-10-30 09:47:05.988947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.410 [2024-10-30 09:47:05.988972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:27.410 pt2 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.410 09:47:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.410 malloc3 00:12:27.410 09:47:06 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.410 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:27.410 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.410 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.668 [2024-10-30 09:47:06.030649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:27.668 [2024-10-30 09:47:06.030769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.668 [2024-10-30 09:47:06.030791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:27.668 [2024-10-30 09:47:06.030799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.669 [2024-10-30 09:47:06.032463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.669 [2024-10-30 09:47:06.032487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:27.669 pt3 00:12:27.669 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.669 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:27.669 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:27.669 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:12:27.669 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.669 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.669 [2024-10-30 09:47:06.038692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 
is claimed 00:12:27.669 [2024-10-30 09:47:06.040141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:27.669 [2024-10-30 09:47:06.040186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:27.669 [2024-10-30 09:47:06.040309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:27.669 [2024-10-30 09:47:06.040322] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:27.669 [2024-10-30 09:47:06.040510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:27.669 [2024-10-30 09:47:06.043445] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:27.669 [2024-10-30 09:47:06.043460] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:27.669 [2024-10-30 09:47:06.043593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.669 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.669 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:12:27.669 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.669 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.669 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:27.669 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:27.669 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:27.669 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.669 09:47:06 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.669 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.669 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.669 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.669 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.669 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.669 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.669 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.669 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.669 "name": "raid_bdev1", 00:12:27.669 "uuid": "46bec6c7-7687-4662-8415-6a74c9e2483a", 00:12:27.669 "strip_size_kb": 64, 00:12:27.669 "state": "online", 00:12:27.669 "raid_level": "raid5f", 00:12:27.669 "superblock": true, 00:12:27.669 "num_base_bdevs": 3, 00:12:27.669 "num_base_bdevs_discovered": 3, 00:12:27.669 "num_base_bdevs_operational": 3, 00:12:27.669 "base_bdevs_list": [ 00:12:27.669 { 00:12:27.669 "name": "pt1", 00:12:27.669 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:27.669 "is_configured": true, 00:12:27.669 "data_offset": 2048, 00:12:27.669 "data_size": 63488 00:12:27.669 }, 00:12:27.669 { 00:12:27.669 "name": "pt2", 00:12:27.669 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.669 "is_configured": true, 00:12:27.669 "data_offset": 2048, 00:12:27.669 "data_size": 63488 00:12:27.669 }, 00:12:27.669 { 00:12:27.669 "name": "pt3", 00:12:27.669 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.669 "is_configured": true, 00:12:27.669 "data_offset": 2048, 00:12:27.669 "data_size": 63488 00:12:27.669 } 00:12:27.669 ] 
00:12:27.669 }' 00:12:27.669 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.669 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.927 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:27.927 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:27.927 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:27.927 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:27.927 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:27.927 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:27.927 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:27.927 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:27.927 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.927 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.927 [2024-10-30 09:47:06.379820] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:27.927 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.927 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:27.927 "name": "raid_bdev1", 00:12:27.927 "aliases": [ 00:12:27.927 "46bec6c7-7687-4662-8415-6a74c9e2483a" 00:12:27.927 ], 00:12:27.927 "product_name": "Raid Volume", 00:12:27.927 "block_size": 512, 00:12:27.927 "num_blocks": 126976, 00:12:27.927 "uuid": "46bec6c7-7687-4662-8415-6a74c9e2483a", 00:12:27.927 "assigned_rate_limits": { 00:12:27.927 
"rw_ios_per_sec": 0, 00:12:27.927 "rw_mbytes_per_sec": 0, 00:12:27.927 "r_mbytes_per_sec": 0, 00:12:27.927 "w_mbytes_per_sec": 0 00:12:27.927 }, 00:12:27.927 "claimed": false, 00:12:27.927 "zoned": false, 00:12:27.927 "supported_io_types": { 00:12:27.927 "read": true, 00:12:27.927 "write": true, 00:12:27.927 "unmap": false, 00:12:27.927 "flush": false, 00:12:27.927 "reset": true, 00:12:27.927 "nvme_admin": false, 00:12:27.927 "nvme_io": false, 00:12:27.927 "nvme_io_md": false, 00:12:27.927 "write_zeroes": true, 00:12:27.927 "zcopy": false, 00:12:27.927 "get_zone_info": false, 00:12:27.927 "zone_management": false, 00:12:27.927 "zone_append": false, 00:12:27.927 "compare": false, 00:12:27.927 "compare_and_write": false, 00:12:27.927 "abort": false, 00:12:27.927 "seek_hole": false, 00:12:27.927 "seek_data": false, 00:12:27.927 "copy": false, 00:12:27.927 "nvme_iov_md": false 00:12:27.927 }, 00:12:27.927 "driver_specific": { 00:12:27.927 "raid": { 00:12:27.927 "uuid": "46bec6c7-7687-4662-8415-6a74c9e2483a", 00:12:27.927 "strip_size_kb": 64, 00:12:27.927 "state": "online", 00:12:27.928 "raid_level": "raid5f", 00:12:27.928 "superblock": true, 00:12:27.928 "num_base_bdevs": 3, 00:12:27.928 "num_base_bdevs_discovered": 3, 00:12:27.928 "num_base_bdevs_operational": 3, 00:12:27.928 "base_bdevs_list": [ 00:12:27.928 { 00:12:27.928 "name": "pt1", 00:12:27.928 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:27.928 "is_configured": true, 00:12:27.928 "data_offset": 2048, 00:12:27.928 "data_size": 63488 00:12:27.928 }, 00:12:27.928 { 00:12:27.928 "name": "pt2", 00:12:27.928 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.928 "is_configured": true, 00:12:27.928 "data_offset": 2048, 00:12:27.928 "data_size": 63488 00:12:27.928 }, 00:12:27.928 { 00:12:27.928 "name": "pt3", 00:12:27.928 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.928 "is_configured": true, 00:12:27.928 "data_offset": 2048, 00:12:27.928 "data_size": 63488 00:12:27.928 } 00:12:27.928 ] 
00:12:27.928 } 00:12:27.928 } 00:12:27.928 }' 00:12:27.928 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:27.928 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:27.928 pt2 00:12:27.928 pt3' 00:12:27.928 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.928 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:27.928 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.928 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:27.928 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.928 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.928 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.928 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.928 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.928 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.928 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.928 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:27.928 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.928 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.928 09:47:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.928 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.928 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.928 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.928 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.928 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.928 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:27.928 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.928 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:28.186 [2024-10-30 09:47:06.575804] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=46bec6c7-7687-4662-8415-6a74c9e2483a 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 46bec6c7-7687-4662-8415-6a74c9e2483a ']' 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.186 [2024-10-30 09:47:06.599650] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:28.186 [2024-10-30 09:47:06.599668] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:28.186 [2024-10-30 09:47:06.599717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:28.186 [2024-10-30 09:47:06.599778] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:28.186 [2024-10-30 09:47:06.599786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.186 09:47:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:28.186 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.187 [2024-10-30 09:47:06.707711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:28.187 [2024-10-30 
09:47:06.709237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:28.187 [2024-10-30 09:47:06.709362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:28.187 [2024-10-30 09:47:06.709408] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:28.187 [2024-10-30 09:47:06.709445] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:28.187 [2024-10-30 09:47:06.709460] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:28.187 [2024-10-30 09:47:06.709473] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:28.187 [2024-10-30 09:47:06.709481] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:28.187 request: 00:12:28.187 { 00:12:28.187 "name": "raid_bdev1", 00:12:28.187 "raid_level": "raid5f", 00:12:28.187 "base_bdevs": [ 00:12:28.187 "malloc1", 00:12:28.187 "malloc2", 00:12:28.187 "malloc3" 00:12:28.187 ], 00:12:28.187 "strip_size_kb": 64, 00:12:28.187 "superblock": false, 00:12:28.187 "method": "bdev_raid_create", 00:12:28.187 "req_id": 1 00:12:28.187 } 00:12:28.187 Got JSON-RPC error response 00:12:28.187 response: 00:12:28.187 { 00:12:28.187 "code": -17, 00:12:28.187 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:28.187 } 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.187 [2024-10-30 09:47:06.751683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:28.187 [2024-10-30 09:47:06.751718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.187 [2024-10-30 09:47:06.751732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:28.187 [2024-10-30 09:47:06.751739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.187 [2024-10-30 09:47:06.753496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.187 [2024-10-30 09:47:06.753524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:28.187 [2024-10-30 09:47:06.753578] 
bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:28.187 [2024-10-30 09:47:06.753611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:28.187 pt1 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.187 "name": "raid_bdev1", 00:12:28.187 "uuid": "46bec6c7-7687-4662-8415-6a74c9e2483a", 00:12:28.187 "strip_size_kb": 64, 00:12:28.187 "state": "configuring", 00:12:28.187 "raid_level": "raid5f", 00:12:28.187 "superblock": true, 00:12:28.187 "num_base_bdevs": 3, 00:12:28.187 "num_base_bdevs_discovered": 1, 00:12:28.187 "num_base_bdevs_operational": 3, 00:12:28.187 "base_bdevs_list": [ 00:12:28.187 { 00:12:28.187 "name": "pt1", 00:12:28.187 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:28.187 "is_configured": true, 00:12:28.187 "data_offset": 2048, 00:12:28.187 "data_size": 63488 00:12:28.187 }, 00:12:28.187 { 00:12:28.187 "name": null, 00:12:28.187 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:28.187 "is_configured": false, 00:12:28.187 "data_offset": 2048, 00:12:28.187 "data_size": 63488 00:12:28.187 }, 00:12:28.187 { 00:12:28.187 "name": null, 00:12:28.187 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:28.187 "is_configured": false, 00:12:28.187 "data_offset": 2048, 00:12:28.187 "data_size": 63488 00:12:28.187 } 00:12:28.187 ] 00:12:28.187 }' 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.187 09:47:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.445 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:12:28.445 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:28.445 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.445 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.445 [2024-10-30 09:47:07.059762] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:28.445 [2024-10-30 09:47:07.059810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.445 [2024-10-30 09:47:07.059828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:28.445 [2024-10-30 09:47:07.059836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.445 [2024-10-30 09:47:07.060172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.445 [2024-10-30 09:47:07.060187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:28.445 [2024-10-30 09:47:07.060247] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:28.445 [2024-10-30 09:47:07.060262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:28.702 pt2 00:12:28.702 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.702 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:28.702 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.702 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.702 [2024-10-30 09:47:07.067764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:28.702 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.702 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:12:28.702 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.702 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.702 09:47:07 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:28.702 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.702 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:28.702 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.702 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.702 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.702 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.702 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.702 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.702 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.702 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.702 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.702 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.702 "name": "raid_bdev1", 00:12:28.702 "uuid": "46bec6c7-7687-4662-8415-6a74c9e2483a", 00:12:28.702 "strip_size_kb": 64, 00:12:28.702 "state": "configuring", 00:12:28.702 "raid_level": "raid5f", 00:12:28.702 "superblock": true, 00:12:28.702 "num_base_bdevs": 3, 00:12:28.702 "num_base_bdevs_discovered": 1, 00:12:28.702 "num_base_bdevs_operational": 3, 00:12:28.702 "base_bdevs_list": [ 00:12:28.702 { 00:12:28.702 "name": "pt1", 00:12:28.702 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:28.702 "is_configured": true, 00:12:28.702 "data_offset": 2048, 00:12:28.702 "data_size": 63488 00:12:28.702 }, 00:12:28.702 { 
00:12:28.702 "name": null, 00:12:28.702 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:28.702 "is_configured": false, 00:12:28.702 "data_offset": 0, 00:12:28.702 "data_size": 63488 00:12:28.702 }, 00:12:28.702 { 00:12:28.702 "name": null, 00:12:28.702 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:28.702 "is_configured": false, 00:12:28.702 "data_offset": 2048, 00:12:28.702 "data_size": 63488 00:12:28.702 } 00:12:28.702 ] 00:12:28.702 }' 00:12:28.702 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.702 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.960 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:28.960 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:28.960 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:28.960 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.960 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.960 [2024-10-30 09:47:07.387809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:28.960 [2024-10-30 09:47:07.387857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.960 [2024-10-30 09:47:07.387871] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:28.960 [2024-10-30 09:47:07.387878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.960 [2024-10-30 09:47:07.388221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.960 [2024-10-30 09:47:07.388234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:28.960 [2024-10-30 
09:47:07.388289] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:28.961 [2024-10-30 09:47:07.388305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:28.961 pt2 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.961 [2024-10-30 09:47:07.399818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:28.961 [2024-10-30 09:47:07.399863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.961 [2024-10-30 09:47:07.399875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:28.961 [2024-10-30 09:47:07.399883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.961 [2024-10-30 09:47:07.400213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.961 [2024-10-30 09:47:07.400231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:28.961 [2024-10-30 09:47:07.400285] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:28.961 [2024-10-30 09:47:07.400300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:28.961 [2024-10-30 09:47:07.400391] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:12:28.961 [2024-10-30 09:47:07.400403] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:28.961 [2024-10-30 09:47:07.400583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:28.961 [2024-10-30 09:47:07.403354] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:28.961 [2024-10-30 09:47:07.403368] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:28.961 [2024-10-30 09:47:07.403472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.961 pt3 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.961 "name": "raid_bdev1", 00:12:28.961 "uuid": "46bec6c7-7687-4662-8415-6a74c9e2483a", 00:12:28.961 "strip_size_kb": 64, 00:12:28.961 "state": "online", 00:12:28.961 "raid_level": "raid5f", 00:12:28.961 "superblock": true, 00:12:28.961 "num_base_bdevs": 3, 00:12:28.961 "num_base_bdevs_discovered": 3, 00:12:28.961 "num_base_bdevs_operational": 3, 00:12:28.961 "base_bdevs_list": [ 00:12:28.961 { 00:12:28.961 "name": "pt1", 00:12:28.961 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:28.961 "is_configured": true, 00:12:28.961 "data_offset": 2048, 00:12:28.961 "data_size": 63488 00:12:28.961 }, 00:12:28.961 { 00:12:28.961 "name": "pt2", 00:12:28.961 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:28.961 "is_configured": true, 00:12:28.961 "data_offset": 2048, 00:12:28.961 "data_size": 63488 00:12:28.961 }, 00:12:28.961 { 00:12:28.961 "name": "pt3", 00:12:28.961 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:28.961 "is_configured": true, 00:12:28.961 "data_offset": 2048, 00:12:28.961 "data_size": 63488 00:12:28.961 } 00:12:28.961 ] 00:12:28.961 }' 00:12:28.961 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.961 09:47:07 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.219 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:29.219 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:29.219 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:29.219 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:29.219 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:29.219 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:29.219 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:29.219 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:29.219 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.219 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.219 [2024-10-30 09:47:07.722827] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:29.219 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.219 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:29.219 "name": "raid_bdev1", 00:12:29.219 "aliases": [ 00:12:29.219 "46bec6c7-7687-4662-8415-6a74c9e2483a" 00:12:29.219 ], 00:12:29.219 "product_name": "Raid Volume", 00:12:29.219 "block_size": 512, 00:12:29.219 "num_blocks": 126976, 00:12:29.219 "uuid": "46bec6c7-7687-4662-8415-6a74c9e2483a", 00:12:29.219 "assigned_rate_limits": { 00:12:29.219 "rw_ios_per_sec": 0, 00:12:29.219 "rw_mbytes_per_sec": 0, 00:12:29.219 "r_mbytes_per_sec": 0, 00:12:29.219 "w_mbytes_per_sec": 0 00:12:29.219 }, 
00:12:29.219 "claimed": false, 00:12:29.219 "zoned": false, 00:12:29.219 "supported_io_types": { 00:12:29.219 "read": true, 00:12:29.219 "write": true, 00:12:29.219 "unmap": false, 00:12:29.219 "flush": false, 00:12:29.219 "reset": true, 00:12:29.219 "nvme_admin": false, 00:12:29.219 "nvme_io": false, 00:12:29.219 "nvme_io_md": false, 00:12:29.219 "write_zeroes": true, 00:12:29.219 "zcopy": false, 00:12:29.219 "get_zone_info": false, 00:12:29.219 "zone_management": false, 00:12:29.219 "zone_append": false, 00:12:29.219 "compare": false, 00:12:29.219 "compare_and_write": false, 00:12:29.219 "abort": false, 00:12:29.219 "seek_hole": false, 00:12:29.219 "seek_data": false, 00:12:29.219 "copy": false, 00:12:29.219 "nvme_iov_md": false 00:12:29.219 }, 00:12:29.219 "driver_specific": { 00:12:29.219 "raid": { 00:12:29.219 "uuid": "46bec6c7-7687-4662-8415-6a74c9e2483a", 00:12:29.219 "strip_size_kb": 64, 00:12:29.219 "state": "online", 00:12:29.219 "raid_level": "raid5f", 00:12:29.219 "superblock": true, 00:12:29.219 "num_base_bdevs": 3, 00:12:29.219 "num_base_bdevs_discovered": 3, 00:12:29.219 "num_base_bdevs_operational": 3, 00:12:29.219 "base_bdevs_list": [ 00:12:29.219 { 00:12:29.219 "name": "pt1", 00:12:29.219 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:29.219 "is_configured": true, 00:12:29.219 "data_offset": 2048, 00:12:29.219 "data_size": 63488 00:12:29.219 }, 00:12:29.219 { 00:12:29.219 "name": "pt2", 00:12:29.219 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:29.219 "is_configured": true, 00:12:29.219 "data_offset": 2048, 00:12:29.219 "data_size": 63488 00:12:29.219 }, 00:12:29.219 { 00:12:29.219 "name": "pt3", 00:12:29.219 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:29.219 "is_configured": true, 00:12:29.219 "data_offset": 2048, 00:12:29.219 "data_size": 63488 00:12:29.219 } 00:12:29.219 ] 00:12:29.219 } 00:12:29.219 } 00:12:29.219 }' 00:12:29.219 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:29.219 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:29.219 pt2 00:12:29.219 pt3' 00:12:29.219 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.219 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:29.219 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.219 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:29.219 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.219 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.219 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.219 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.219 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.219 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.219 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.477 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:29.477 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.477 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.477 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:12:29.477 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.477 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.477 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.477 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.477 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:29.477 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.477 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.477 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.477 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.477 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.477 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.478 [2024-10-30 09:47:07.914833] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
46bec6c7-7687-4662-8415-6a74c9e2483a '!=' 46bec6c7-7687-4662-8415-6a74c9e2483a ']' 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.478 [2024-10-30 09:47:07.946722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.478 "name": "raid_bdev1", 00:12:29.478 "uuid": "46bec6c7-7687-4662-8415-6a74c9e2483a", 00:12:29.478 "strip_size_kb": 64, 00:12:29.478 "state": "online", 00:12:29.478 "raid_level": "raid5f", 00:12:29.478 "superblock": true, 00:12:29.478 "num_base_bdevs": 3, 00:12:29.478 "num_base_bdevs_discovered": 2, 00:12:29.478 "num_base_bdevs_operational": 2, 00:12:29.478 "base_bdevs_list": [ 00:12:29.478 { 00:12:29.478 "name": null, 00:12:29.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.478 "is_configured": false, 00:12:29.478 "data_offset": 0, 00:12:29.478 "data_size": 63488 00:12:29.478 }, 00:12:29.478 { 00:12:29.478 "name": "pt2", 00:12:29.478 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:29.478 "is_configured": true, 00:12:29.478 "data_offset": 2048, 00:12:29.478 "data_size": 63488 00:12:29.478 }, 00:12:29.478 { 00:12:29.478 "name": "pt3", 00:12:29.478 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:29.478 "is_configured": true, 00:12:29.478 "data_offset": 2048, 00:12:29.478 "data_size": 63488 00:12:29.478 } 00:12:29.478 ] 00:12:29.478 }' 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.478 09:47:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.736 
09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.736 [2024-10-30 09:47:08.258738] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.736 [2024-10-30 09:47:08.258761] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.736 [2024-10-30 09:47:08.258814] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.736 [2024-10-30 09:47:08.258862] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.736 [2024-10-30 09:47:08.258872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.736 [2024-10-30 09:47:08.318728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:12:29.736 [2024-10-30 09:47:08.318773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.736 [2024-10-30 09:47:08.318788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:29.736 [2024-10-30 09:47:08.318819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.736 [2024-10-30 09:47:08.320563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.736 [2024-10-30 09:47:08.320673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:29.736 [2024-10-30 09:47:08.320740] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:29.736 [2024-10-30 09:47:08.320776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:29.736 pt2 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.736 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.995 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.995 "name": "raid_bdev1", 00:12:29.995 "uuid": "46bec6c7-7687-4662-8415-6a74c9e2483a", 00:12:29.995 "strip_size_kb": 64, 00:12:29.995 "state": "configuring", 00:12:29.995 "raid_level": "raid5f", 00:12:29.995 "superblock": true, 00:12:29.995 "num_base_bdevs": 3, 00:12:29.995 "num_base_bdevs_discovered": 1, 00:12:29.995 "num_base_bdevs_operational": 2, 00:12:29.995 "base_bdevs_list": [ 00:12:29.995 { 00:12:29.995 "name": null, 00:12:29.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.995 "is_configured": false, 00:12:29.995 "data_offset": 2048, 00:12:29.995 "data_size": 63488 00:12:29.995 }, 00:12:29.995 { 00:12:29.995 "name": "pt2", 00:12:29.995 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:29.995 "is_configured": true, 00:12:29.995 "data_offset": 2048, 00:12:29.995 "data_size": 63488 00:12:29.995 }, 00:12:29.995 { 00:12:29.995 "name": null, 00:12:29.995 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:29.995 "is_configured": false, 00:12:29.995 "data_offset": 2048, 00:12:29.995 "data_size": 63488 00:12:29.995 } 00:12:29.995 ] 00:12:29.995 }' 00:12:29.995 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.995 09:47:08 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.254 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:30.254 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:30.254 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:12:30.254 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:30.254 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.254 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.254 [2024-10-30 09:47:08.638803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:30.254 [2024-10-30 09:47:08.638856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.254 [2024-10-30 09:47:08.638874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:30.254 [2024-10-30 09:47:08.638883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.254 [2024-10-30 09:47:08.639244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.254 [2024-10-30 09:47:08.639259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:30.254 [2024-10-30 09:47:08.639318] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:30.254 [2024-10-30 09:47:08.639339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:30.254 [2024-10-30 09:47:08.639421] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:30.254 [2024-10-30 09:47:08.639430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:30.254 [2024-10-30 
09:47:08.639632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:30.254 [2024-10-30 09:47:08.642474] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:30.254 [2024-10-30 09:47:08.642488] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:30.254 [2024-10-30 09:47:08.642655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.254 pt3 00:12:30.254 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.254 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:30.254 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.254 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.254 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:30.254 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:30.254 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:30.254 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.254 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.254 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.254 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.254 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.254 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:30.254 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.254 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.254 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.254 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.254 "name": "raid_bdev1", 00:12:30.254 "uuid": "46bec6c7-7687-4662-8415-6a74c9e2483a", 00:12:30.254 "strip_size_kb": 64, 00:12:30.254 "state": "online", 00:12:30.254 "raid_level": "raid5f", 00:12:30.254 "superblock": true, 00:12:30.254 "num_base_bdevs": 3, 00:12:30.254 "num_base_bdevs_discovered": 2, 00:12:30.254 "num_base_bdevs_operational": 2, 00:12:30.254 "base_bdevs_list": [ 00:12:30.254 { 00:12:30.254 "name": null, 00:12:30.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.254 "is_configured": false, 00:12:30.254 "data_offset": 2048, 00:12:30.254 "data_size": 63488 00:12:30.254 }, 00:12:30.254 { 00:12:30.254 "name": "pt2", 00:12:30.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:30.254 "is_configured": true, 00:12:30.254 "data_offset": 2048, 00:12:30.254 "data_size": 63488 00:12:30.254 }, 00:12:30.254 { 00:12:30.254 "name": "pt3", 00:12:30.254 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:30.254 "is_configured": true, 00:12:30.254 "data_offset": 2048, 00:12:30.254 "data_size": 63488 00:12:30.254 } 00:12:30.254 ] 00:12:30.254 }' 00:12:30.254 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.254 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.516 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:30.516 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.516 09:47:08 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:30.516 [2024-10-30 09:47:08.969979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.516 [2024-10-30 09:47:08.970007] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:30.516 [2024-10-30 09:47:08.970072] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.516 [2024-10-30 09:47:08.970124] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.516 [2024-10-30 09:47:08.970131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:30.516 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.516 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:30.516 09:47:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.516 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.516 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.516 09:47:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.516 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:30.516 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:30.516 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:12:30.516 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:12:30.516 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:12:30.516 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.516 09:47:09 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.516 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.516 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:30.516 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.516 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.516 [2024-10-30 09:47:09.022009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:30.516 [2024-10-30 09:47:09.022055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.516 [2024-10-30 09:47:09.022083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:30.516 [2024-10-30 09:47:09.022091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.516 [2024-10-30 09:47:09.023894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.516 [2024-10-30 09:47:09.023924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:30.517 [2024-10-30 09:47:09.023987] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:30.517 [2024-10-30 09:47:09.024020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:30.517 [2024-10-30 09:47:09.024125] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:30.517 [2024-10-30 09:47:09.024134] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.517 [2024-10-30 09:47:09.024148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:30.517 
[2024-10-30 09:47:09.024187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:30.517 pt1 00:12:30.517 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.517 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:12:30.517 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:12:30.517 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.517 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.517 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:30.517 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:30.517 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:30.517 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.517 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.517 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.517 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.517 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.517 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.517 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.517 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.517 09:47:09 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.517 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.517 "name": "raid_bdev1", 00:12:30.517 "uuid": "46bec6c7-7687-4662-8415-6a74c9e2483a", 00:12:30.517 "strip_size_kb": 64, 00:12:30.517 "state": "configuring", 00:12:30.517 "raid_level": "raid5f", 00:12:30.517 "superblock": true, 00:12:30.517 "num_base_bdevs": 3, 00:12:30.517 "num_base_bdevs_discovered": 1, 00:12:30.517 "num_base_bdevs_operational": 2, 00:12:30.517 "base_bdevs_list": [ 00:12:30.517 { 00:12:30.517 "name": null, 00:12:30.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.517 "is_configured": false, 00:12:30.517 "data_offset": 2048, 00:12:30.517 "data_size": 63488 00:12:30.517 }, 00:12:30.517 { 00:12:30.517 "name": "pt2", 00:12:30.517 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:30.517 "is_configured": true, 00:12:30.517 "data_offset": 2048, 00:12:30.517 "data_size": 63488 00:12:30.517 }, 00:12:30.517 { 00:12:30.517 "name": null, 00:12:30.517 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:30.517 "is_configured": false, 00:12:30.517 "data_offset": 2048, 00:12:30.517 "data_size": 63488 00:12:30.517 } 00:12:30.517 ] 00:12:30.517 }' 00:12:30.517 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.517 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.775 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:30.775 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.775 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:30.775 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.775 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:12:30.775 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:30.775 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:30.775 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.775 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.775 [2024-10-30 09:47:09.382094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:30.775 [2024-10-30 09:47:09.382146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.775 [2024-10-30 09:47:09.382162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:30.775 [2024-10-30 09:47:09.382169] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.775 [2024-10-30 09:47:09.382523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.775 [2024-10-30 09:47:09.382534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:30.775 [2024-10-30 09:47:09.382595] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:30.775 [2024-10-30 09:47:09.382611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:30.775 [2024-10-30 09:47:09.382700] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:30.775 [2024-10-30 09:47:09.382707] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:30.775 [2024-10-30 09:47:09.382895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:30.775 [2024-10-30 09:47:09.385703] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:30.775 pt3 
00:12:30.775 [2024-10-30 09:47:09.385818] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:30.775 [2024-10-30 09:47:09.385988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.775 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.775 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:30.775 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.775 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.775 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:30.775 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:30.775 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:30.775 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.775 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.775 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.775 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.775 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.775 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.775 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.775 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.033 09:47:09 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.033 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.033 "name": "raid_bdev1", 00:12:31.033 "uuid": "46bec6c7-7687-4662-8415-6a74c9e2483a", 00:12:31.033 "strip_size_kb": 64, 00:12:31.033 "state": "online", 00:12:31.033 "raid_level": "raid5f", 00:12:31.033 "superblock": true, 00:12:31.033 "num_base_bdevs": 3, 00:12:31.033 "num_base_bdevs_discovered": 2, 00:12:31.033 "num_base_bdevs_operational": 2, 00:12:31.033 "base_bdevs_list": [ 00:12:31.033 { 00:12:31.033 "name": null, 00:12:31.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.033 "is_configured": false, 00:12:31.033 "data_offset": 2048, 00:12:31.033 "data_size": 63488 00:12:31.033 }, 00:12:31.033 { 00:12:31.033 "name": "pt2", 00:12:31.033 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:31.033 "is_configured": true, 00:12:31.033 "data_offset": 2048, 00:12:31.033 "data_size": 63488 00:12:31.033 }, 00:12:31.033 { 00:12:31.033 "name": "pt3", 00:12:31.033 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:31.033 "is_configured": true, 00:12:31.033 "data_offset": 2048, 00:12:31.033 "data_size": 63488 00:12:31.033 } 00:12:31.033 ] 00:12:31.033 }' 00:12:31.033 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.033 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.293 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:31.293 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:31.293 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.293 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.293 09:47:09 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.293 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:31.293 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:31.293 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:31.293 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.293 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.293 [2024-10-30 09:47:09.726322] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:31.293 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.293 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 46bec6c7-7687-4662-8415-6a74c9e2483a '!=' 46bec6c7-7687-4662-8415-6a74c9e2483a ']' 00:12:31.293 09:47:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 78925 00:12:31.293 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 78925 ']' 00:12:31.293 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 78925 00:12:31.293 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:12:31.293 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:31.293 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78925 00:12:31.293 killing process with pid 78925 00:12:31.293 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:31.293 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:31.293 09:47:09 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 78925' 00:12:31.293 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 78925 00:12:31.293 [2024-10-30 09:47:09.772639] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:31.293 09:47:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 78925 00:12:31.293 [2024-10-30 09:47:09.772729] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:31.293 [2024-10-30 09:47:09.772793] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:31.293 [2024-10-30 09:47:09.772804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:31.551 [2024-10-30 09:47:09.917207] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:32.200 09:47:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:32.200 00:12:32.200 real 0m5.483s 00:12:32.200 user 0m8.705s 00:12:32.200 sys 0m0.905s 00:12:32.200 ************************************ 00:12:32.200 END TEST raid5f_superblock_test 00:12:32.200 ************************************ 00:12:32.200 09:47:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:32.200 09:47:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.200 09:47:10 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:12:32.200 09:47:10 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:12:32.200 09:47:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:12:32.200 09:47:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:32.200 09:47:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:32.200 ************************************ 00:12:32.200 START TEST 
raid5f_rebuild_test 00:12:32.200 ************************************ 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 false false true 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:32.200 09:47:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:32.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=79341 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 79341 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 79341 ']' 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.200 09:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:32.200 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:32.200 Zero copy mechanism will not be used. 00:12:32.201 [2024-10-30 09:47:10.602974] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:12:32.201 [2024-10-30 09:47:10.603090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79341 ] 00:12:32.201 [2024-10-30 09:47:10.753170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.459 [2024-10-30 09:47:10.834727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.459 [2024-10-30 09:47:10.941052] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.459 [2024-10-30 09:47:10.941087] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.026 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.027 09:47:11 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.027 BaseBdev1_malloc 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.027 [2024-10-30 09:47:11.521772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:33.027 [2024-10-30 09:47:11.521937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.027 [2024-10-30 09:47:11.521973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:33.027 [2024-10-30 09:47:11.522347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.027 [2024-10-30 09:47:11.524107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.027 [2024-10-30 09:47:11.524134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:33.027 BaseBdev1 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.027 BaseBdev2_malloc 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.027 [2024-10-30 09:47:11.553069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:33.027 [2024-10-30 09:47:11.553115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.027 [2024-10-30 09:47:11.553129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:33.027 [2024-10-30 09:47:11.553139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.027 [2024-10-30 09:47:11.554786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.027 [2024-10-30 09:47:11.554816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:33.027 BaseBdev2 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.027 BaseBdev3_malloc 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.027 [2024-10-30 09:47:11.597898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:33.027 [2024-10-30 09:47:11.597945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.027 [2024-10-30 09:47:11.597961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:33.027 [2024-10-30 09:47:11.597970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.027 [2024-10-30 09:47:11.599650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.027 [2024-10-30 09:47:11.599683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:33.027 BaseBdev3 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.027 spare_malloc 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.027 spare_delay 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.027 09:47:11 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.027 [2024-10-30 09:47:11.641166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:33.027 [2024-10-30 09:47:11.641209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.027 [2024-10-30 09:47:11.641225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:33.027 [2024-10-30 09:47:11.641235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.027 [2024-10-30 09:47:11.642944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.027 [2024-10-30 09:47:11.642977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:33.027 spare 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.027 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.285 [2024-10-30 09:47:11.649212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:33.285 [2024-10-30 09:47:11.650828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:33.285 [2024-10-30 09:47:11.650947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:33.285 [2024-10-30 09:47:11.651030] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:33.285 [2024-10-30 09:47:11.651079] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:33.285 [2024-10-30 09:47:11.651386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:33.285 [2024-10-30 09:47:11.654482] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:33.285 [2024-10-30 09:47:11.654558] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:33.285 [2024-10-30 09:47:11.654746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.285 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.285 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:12:33.285 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.285 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.285 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:33.285 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.285 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:33.286 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.286 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.286 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.286 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.286 09:47:11 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.286 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.286 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.286 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.286 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.286 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.286 "name": "raid_bdev1", 00:12:33.286 "uuid": "d446b603-0ed6-420c-947e-ca734d183c79", 00:12:33.286 "strip_size_kb": 64, 00:12:33.286 "state": "online", 00:12:33.286 "raid_level": "raid5f", 00:12:33.286 "superblock": false, 00:12:33.286 "num_base_bdevs": 3, 00:12:33.286 "num_base_bdevs_discovered": 3, 00:12:33.286 "num_base_bdevs_operational": 3, 00:12:33.286 "base_bdevs_list": [ 00:12:33.286 { 00:12:33.286 "name": "BaseBdev1", 00:12:33.286 "uuid": "466efe95-e24b-50a5-ace7-068deb7c7d0d", 00:12:33.286 "is_configured": true, 00:12:33.286 "data_offset": 0, 00:12:33.286 "data_size": 65536 00:12:33.286 }, 00:12:33.286 { 00:12:33.286 "name": "BaseBdev2", 00:12:33.286 "uuid": "8921bcea-7fef-5e5d-a0d8-1d98138f4ec7", 00:12:33.286 "is_configured": true, 00:12:33.286 "data_offset": 0, 00:12:33.286 "data_size": 65536 00:12:33.286 }, 00:12:33.286 { 00:12:33.286 "name": "BaseBdev3", 00:12:33.286 "uuid": "ba8a5fae-a893-5ebf-a16e-53c00bee200f", 00:12:33.286 "is_configured": true, 00:12:33.286 "data_offset": 0, 00:12:33.286 "data_size": 65536 00:12:33.286 } 00:12:33.286 ] 00:12:33.286 }' 00:12:33.286 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.286 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.544 09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:33.544 
09:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:33.544 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.544 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.544 [2024-10-30 09:47:11.982984] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:33.544 09:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.544 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:12:33.544 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:33.544 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.544 09:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.544 09:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.544 09:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.544 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:33.544 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:33.544 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:33.544 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:33.544 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:33.544 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:33.544 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:33.544 09:47:12 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:12:33.544 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:33.544 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:33.544 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:33.544 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:33.544 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:33.544 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:33.802 [2024-10-30 09:47:12.226892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:33.802 /dev/nbd0 00:12:33.802 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:33.802 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:33.802 09:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:33.802 09:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:12:33.802 09:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:33.802 09:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:33.802 09:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:33.802 09:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:12:33.802 09:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:33.802 09:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:33.802 09:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.802 1+0 records in 00:12:33.802 1+0 records out 00:12:33.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220838 s, 18.5 MB/s 00:12:33.802 09:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.802 09:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:12:33.802 09:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.802 09:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:33.802 09:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:12:33.802 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:33.803 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:33.803 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:12:33.803 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:12:33.803 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:12:33.803 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:12:34.061 512+0 records in 00:12:34.061 512+0 records out 00:12:34.061 67108864 bytes (67 MB, 64 MiB) copied, 0.346137 s, 194 MB/s 00:12:34.061 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:34.061 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:34.061 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:34.061 09:47:12 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:12:34.061 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:34.061 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.061 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:34.319 [2024-10-30 09:47:12.896547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.319 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:34.319 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:34.319 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:34.319 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.319 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.319 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:34.319 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:34.319 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.319 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:34.319 09:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.319 09:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.319 [2024-10-30 09:47:12.920673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:34.319 09:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.319 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:34.319 
09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.319 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.319 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:34.319 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.319 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:34.319 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.319 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.319 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.319 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.319 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.319 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.319 09:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.319 09:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.577 09:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.577 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.577 "name": "raid_bdev1", 00:12:34.577 "uuid": "d446b603-0ed6-420c-947e-ca734d183c79", 00:12:34.577 "strip_size_kb": 64, 00:12:34.577 "state": "online", 00:12:34.577 "raid_level": "raid5f", 00:12:34.577 "superblock": false, 00:12:34.577 "num_base_bdevs": 3, 00:12:34.577 "num_base_bdevs_discovered": 2, 00:12:34.577 "num_base_bdevs_operational": 2, 00:12:34.577 "base_bdevs_list": [ 00:12:34.577 { 
00:12:34.577 "name": null, 00:12:34.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.577 "is_configured": false, 00:12:34.577 "data_offset": 0, 00:12:34.577 "data_size": 65536 00:12:34.577 }, 00:12:34.577 { 00:12:34.577 "name": "BaseBdev2", 00:12:34.577 "uuid": "8921bcea-7fef-5e5d-a0d8-1d98138f4ec7", 00:12:34.577 "is_configured": true, 00:12:34.577 "data_offset": 0, 00:12:34.577 "data_size": 65536 00:12:34.577 }, 00:12:34.577 { 00:12:34.577 "name": "BaseBdev3", 00:12:34.577 "uuid": "ba8a5fae-a893-5ebf-a16e-53c00bee200f", 00:12:34.577 "is_configured": true, 00:12:34.577 "data_offset": 0, 00:12:34.577 "data_size": 65536 00:12:34.577 } 00:12:34.577 ] 00:12:34.577 }' 00:12:34.577 09:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.577 09:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.836 09:47:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:34.836 09:47:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.836 09:47:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.836 [2024-10-30 09:47:13.236757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:34.836 [2024-10-30 09:47:13.247508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:12:34.836 09:47:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.836 09:47:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:34.836 [2024-10-30 09:47:13.253140] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:35.809 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- 
# local raid_bdev_name=raid_bdev1 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.810 "name": "raid_bdev1", 00:12:35.810 "uuid": "d446b603-0ed6-420c-947e-ca734d183c79", 00:12:35.810 "strip_size_kb": 64, 00:12:35.810 "state": "online", 00:12:35.810 "raid_level": "raid5f", 00:12:35.810 "superblock": false, 00:12:35.810 "num_base_bdevs": 3, 00:12:35.810 "num_base_bdevs_discovered": 3, 00:12:35.810 "num_base_bdevs_operational": 3, 00:12:35.810 "process": { 00:12:35.810 "type": "rebuild", 00:12:35.810 "target": "spare", 00:12:35.810 "progress": { 00:12:35.810 "blocks": 18432, 00:12:35.810 "percent": 14 00:12:35.810 } 00:12:35.810 }, 00:12:35.810 "base_bdevs_list": [ 00:12:35.810 { 00:12:35.810 "name": "spare", 00:12:35.810 "uuid": "fde72d69-325a-525b-a0b3-041434e033a3", 00:12:35.810 "is_configured": true, 00:12:35.810 "data_offset": 0, 00:12:35.810 "data_size": 65536 00:12:35.810 }, 00:12:35.810 { 00:12:35.810 "name": "BaseBdev2", 00:12:35.810 "uuid": "8921bcea-7fef-5e5d-a0d8-1d98138f4ec7", 00:12:35.810 "is_configured": true, 00:12:35.810 "data_offset": 0, 00:12:35.810 
"data_size": 65536 00:12:35.810 }, 00:12:35.810 { 00:12:35.810 "name": "BaseBdev3", 00:12:35.810 "uuid": "ba8a5fae-a893-5ebf-a16e-53c00bee200f", 00:12:35.810 "is_configured": true, 00:12:35.810 "data_offset": 0, 00:12:35.810 "data_size": 65536 00:12:35.810 } 00:12:35.810 ] 00:12:35.810 }' 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.810 [2024-10-30 09:47:14.358221] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:35.810 [2024-10-30 09:47:14.363068] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:35.810 [2024-10-30 09:47:14.363119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.810 [2024-10-30 09:47:14.363136] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:35.810 [2024-10-30 09:47:14.363144] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.810 09:47:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.082 09:47:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.082 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.082 "name": "raid_bdev1", 00:12:36.082 "uuid": "d446b603-0ed6-420c-947e-ca734d183c79", 00:12:36.082 "strip_size_kb": 64, 00:12:36.082 "state": "online", 00:12:36.082 "raid_level": "raid5f", 00:12:36.082 "superblock": false, 00:12:36.082 "num_base_bdevs": 3, 00:12:36.082 "num_base_bdevs_discovered": 2, 00:12:36.082 "num_base_bdevs_operational": 2, 00:12:36.082 "base_bdevs_list": [ 00:12:36.082 { 00:12:36.082 "name": null, 00:12:36.082 
"uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.082 "is_configured": false, 00:12:36.082 "data_offset": 0, 00:12:36.082 "data_size": 65536 00:12:36.082 }, 00:12:36.082 { 00:12:36.082 "name": "BaseBdev2", 00:12:36.082 "uuid": "8921bcea-7fef-5e5d-a0d8-1d98138f4ec7", 00:12:36.082 "is_configured": true, 00:12:36.082 "data_offset": 0, 00:12:36.082 "data_size": 65536 00:12:36.082 }, 00:12:36.082 { 00:12:36.082 "name": "BaseBdev3", 00:12:36.082 "uuid": "ba8a5fae-a893-5ebf-a16e-53c00bee200f", 00:12:36.082 "is_configured": true, 00:12:36.082 "data_offset": 0, 00:12:36.082 "data_size": 65536 00:12:36.082 } 00:12:36.082 ] 00:12:36.082 }' 00:12:36.082 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.082 09:47:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.082 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:36.082 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.082 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:36.082 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:36.082 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.082 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.083 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.083 09:47:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.083 09:47:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.341 09:47:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.341 09:47:14 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.341 "name": "raid_bdev1", 00:12:36.341 "uuid": "d446b603-0ed6-420c-947e-ca734d183c79", 00:12:36.341 "strip_size_kb": 64, 00:12:36.341 "state": "online", 00:12:36.341 "raid_level": "raid5f", 00:12:36.341 "superblock": false, 00:12:36.341 "num_base_bdevs": 3, 00:12:36.341 "num_base_bdevs_discovered": 2, 00:12:36.341 "num_base_bdevs_operational": 2, 00:12:36.341 "base_bdevs_list": [ 00:12:36.341 { 00:12:36.341 "name": null, 00:12:36.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.341 "is_configured": false, 00:12:36.341 "data_offset": 0, 00:12:36.341 "data_size": 65536 00:12:36.341 }, 00:12:36.341 { 00:12:36.341 "name": "BaseBdev2", 00:12:36.341 "uuid": "8921bcea-7fef-5e5d-a0d8-1d98138f4ec7", 00:12:36.341 "is_configured": true, 00:12:36.341 "data_offset": 0, 00:12:36.341 "data_size": 65536 00:12:36.341 }, 00:12:36.341 { 00:12:36.341 "name": "BaseBdev3", 00:12:36.341 "uuid": "ba8a5fae-a893-5ebf-a16e-53c00bee200f", 00:12:36.341 "is_configured": true, 00:12:36.341 "data_offset": 0, 00:12:36.341 "data_size": 65536 00:12:36.341 } 00:12:36.341 ] 00:12:36.341 }' 00:12:36.341 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.341 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:36.341 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.341 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:36.341 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:36.341 09:47:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.341 09:47:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.341 [2024-10-30 09:47:14.796873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:12:36.341 [2024-10-30 09:47:14.805228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:12:36.341 09:47:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.341 09:47:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:36.341 [2024-10-30 09:47:14.809562] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:37.274 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:37.274 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.274 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:37.274 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:37.274 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.274 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.274 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.274 09:47:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.274 09:47:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.274 09:47:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.274 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.274 "name": "raid_bdev1", 00:12:37.274 "uuid": "d446b603-0ed6-420c-947e-ca734d183c79", 00:12:37.274 "strip_size_kb": 64, 00:12:37.274 "state": "online", 00:12:37.274 "raid_level": "raid5f", 00:12:37.274 "superblock": false, 00:12:37.274 "num_base_bdevs": 3, 00:12:37.274 
"num_base_bdevs_discovered": 3, 00:12:37.274 "num_base_bdevs_operational": 3, 00:12:37.274 "process": { 00:12:37.274 "type": "rebuild", 00:12:37.274 "target": "spare", 00:12:37.274 "progress": { 00:12:37.274 "blocks": 20480, 00:12:37.274 "percent": 15 00:12:37.274 } 00:12:37.274 }, 00:12:37.274 "base_bdevs_list": [ 00:12:37.274 { 00:12:37.274 "name": "spare", 00:12:37.274 "uuid": "fde72d69-325a-525b-a0b3-041434e033a3", 00:12:37.274 "is_configured": true, 00:12:37.274 "data_offset": 0, 00:12:37.274 "data_size": 65536 00:12:37.274 }, 00:12:37.275 { 00:12:37.275 "name": "BaseBdev2", 00:12:37.275 "uuid": "8921bcea-7fef-5e5d-a0d8-1d98138f4ec7", 00:12:37.275 "is_configured": true, 00:12:37.275 "data_offset": 0, 00:12:37.275 "data_size": 65536 00:12:37.275 }, 00:12:37.275 { 00:12:37.275 "name": "BaseBdev3", 00:12:37.275 "uuid": "ba8a5fae-a893-5ebf-a16e-53c00bee200f", 00:12:37.275 "is_configured": true, 00:12:37.275 "data_offset": 0, 00:12:37.275 "data_size": 65536 00:12:37.275 } 00:12:37.275 ] 00:12:37.275 }' 00:12:37.275 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.275 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:37.275 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.533 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:37.533 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:37.533 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:12:37.533 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:12:37.533 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=430 00:12:37.533 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout 
)) 00:12:37.533 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:37.533 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.533 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:37.533 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:37.533 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.533 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.533 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.533 09:47:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.533 09:47:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.533 09:47:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.533 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.533 "name": "raid_bdev1", 00:12:37.533 "uuid": "d446b603-0ed6-420c-947e-ca734d183c79", 00:12:37.533 "strip_size_kb": 64, 00:12:37.533 "state": "online", 00:12:37.533 "raid_level": "raid5f", 00:12:37.533 "superblock": false, 00:12:37.533 "num_base_bdevs": 3, 00:12:37.533 "num_base_bdevs_discovered": 3, 00:12:37.533 "num_base_bdevs_operational": 3, 00:12:37.533 "process": { 00:12:37.533 "type": "rebuild", 00:12:37.533 "target": "spare", 00:12:37.533 "progress": { 00:12:37.533 "blocks": 20480, 00:12:37.533 "percent": 15 00:12:37.533 } 00:12:37.533 }, 00:12:37.533 "base_bdevs_list": [ 00:12:37.533 { 00:12:37.533 "name": "spare", 00:12:37.533 "uuid": "fde72d69-325a-525b-a0b3-041434e033a3", 00:12:37.533 "is_configured": true, 00:12:37.533 "data_offset": 0, 00:12:37.533 
"data_size": 65536 00:12:37.533 }, 00:12:37.533 { 00:12:37.533 "name": "BaseBdev2", 00:12:37.533 "uuid": "8921bcea-7fef-5e5d-a0d8-1d98138f4ec7", 00:12:37.533 "is_configured": true, 00:12:37.533 "data_offset": 0, 00:12:37.533 "data_size": 65536 00:12:37.533 }, 00:12:37.533 { 00:12:37.533 "name": "BaseBdev3", 00:12:37.533 "uuid": "ba8a5fae-a893-5ebf-a16e-53c00bee200f", 00:12:37.533 "is_configured": true, 00:12:37.533 "data_offset": 0, 00:12:37.533 "data_size": 65536 00:12:37.533 } 00:12:37.533 ] 00:12:37.533 }' 00:12:37.533 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.533 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:37.533 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.533 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:37.533 09:47:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:38.468 09:47:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:38.468 09:47:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:38.468 09:47:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.468 09:47:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:38.468 09:47:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:38.468 09:47:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.468 09:47:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.468 09:47:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.468 09:47:17 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.468 09:47:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.468 09:47:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.468 09:47:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.468 "name": "raid_bdev1", 00:12:38.468 "uuid": "d446b603-0ed6-420c-947e-ca734d183c79", 00:12:38.468 "strip_size_kb": 64, 00:12:38.468 "state": "online", 00:12:38.468 "raid_level": "raid5f", 00:12:38.468 "superblock": false, 00:12:38.468 "num_base_bdevs": 3, 00:12:38.468 "num_base_bdevs_discovered": 3, 00:12:38.468 "num_base_bdevs_operational": 3, 00:12:38.468 "process": { 00:12:38.468 "type": "rebuild", 00:12:38.468 "target": "spare", 00:12:38.468 "progress": { 00:12:38.468 "blocks": 43008, 00:12:38.468 "percent": 32 00:12:38.468 } 00:12:38.468 }, 00:12:38.468 "base_bdevs_list": [ 00:12:38.468 { 00:12:38.468 "name": "spare", 00:12:38.468 "uuid": "fde72d69-325a-525b-a0b3-041434e033a3", 00:12:38.468 "is_configured": true, 00:12:38.468 "data_offset": 0, 00:12:38.468 "data_size": 65536 00:12:38.468 }, 00:12:38.468 { 00:12:38.468 "name": "BaseBdev2", 00:12:38.468 "uuid": "8921bcea-7fef-5e5d-a0d8-1d98138f4ec7", 00:12:38.468 "is_configured": true, 00:12:38.468 "data_offset": 0, 00:12:38.468 "data_size": 65536 00:12:38.468 }, 00:12:38.468 { 00:12:38.468 "name": "BaseBdev3", 00:12:38.468 "uuid": "ba8a5fae-a893-5ebf-a16e-53c00bee200f", 00:12:38.468 "is_configured": true, 00:12:38.468 "data_offset": 0, 00:12:38.468 "data_size": 65536 00:12:38.468 } 00:12:38.468 ] 00:12:38.468 }' 00:12:38.468 09:47:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.468 09:47:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:38.468 09:47:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:12:38.726 09:47:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:38.726 09:47:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:39.666 09:47:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:39.666 09:47:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:39.666 09:47:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.666 09:47:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:39.666 09:47:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:39.666 09:47:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.666 09:47:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.666 09:47:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.666 09:47:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.666 09:47:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.666 09:47:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.666 09:47:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.666 "name": "raid_bdev1", 00:12:39.666 "uuid": "d446b603-0ed6-420c-947e-ca734d183c79", 00:12:39.666 "strip_size_kb": 64, 00:12:39.666 "state": "online", 00:12:39.666 "raid_level": "raid5f", 00:12:39.666 "superblock": false, 00:12:39.666 "num_base_bdevs": 3, 00:12:39.666 "num_base_bdevs_discovered": 3, 00:12:39.666 "num_base_bdevs_operational": 3, 00:12:39.666 "process": { 00:12:39.667 "type": "rebuild", 00:12:39.667 "target": "spare", 00:12:39.667 
"progress": { 00:12:39.667 "blocks": 65536, 00:12:39.667 "percent": 50 00:12:39.667 } 00:12:39.667 }, 00:12:39.667 "base_bdevs_list": [ 00:12:39.667 { 00:12:39.667 "name": "spare", 00:12:39.667 "uuid": "fde72d69-325a-525b-a0b3-041434e033a3", 00:12:39.667 "is_configured": true, 00:12:39.667 "data_offset": 0, 00:12:39.667 "data_size": 65536 00:12:39.667 }, 00:12:39.667 { 00:12:39.667 "name": "BaseBdev2", 00:12:39.667 "uuid": "8921bcea-7fef-5e5d-a0d8-1d98138f4ec7", 00:12:39.667 "is_configured": true, 00:12:39.667 "data_offset": 0, 00:12:39.667 "data_size": 65536 00:12:39.667 }, 00:12:39.667 { 00:12:39.667 "name": "BaseBdev3", 00:12:39.667 "uuid": "ba8a5fae-a893-5ebf-a16e-53c00bee200f", 00:12:39.667 "is_configured": true, 00:12:39.667 "data_offset": 0, 00:12:39.667 "data_size": 65536 00:12:39.667 } 00:12:39.667 ] 00:12:39.667 }' 00:12:39.667 09:47:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.667 09:47:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:39.667 09:47:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.667 09:47:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:39.667 09:47:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:40.657 09:47:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:40.657 09:47:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:40.657 09:47:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.657 09:47:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:40.657 09:47:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.657 09:47:19 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.657 09:47:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.657 09:47:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.657 09:47:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.657 09:47:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.657 09:47:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.657 09:47:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.657 "name": "raid_bdev1", 00:12:40.657 "uuid": "d446b603-0ed6-420c-947e-ca734d183c79", 00:12:40.657 "strip_size_kb": 64, 00:12:40.657 "state": "online", 00:12:40.657 "raid_level": "raid5f", 00:12:40.657 "superblock": false, 00:12:40.657 "num_base_bdevs": 3, 00:12:40.657 "num_base_bdevs_discovered": 3, 00:12:40.657 "num_base_bdevs_operational": 3, 00:12:40.657 "process": { 00:12:40.657 "type": "rebuild", 00:12:40.657 "target": "spare", 00:12:40.657 "progress": { 00:12:40.657 "blocks": 88064, 00:12:40.657 "percent": 67 00:12:40.657 } 00:12:40.657 }, 00:12:40.657 "base_bdevs_list": [ 00:12:40.657 { 00:12:40.657 "name": "spare", 00:12:40.657 "uuid": "fde72d69-325a-525b-a0b3-041434e033a3", 00:12:40.657 "is_configured": true, 00:12:40.657 "data_offset": 0, 00:12:40.657 "data_size": 65536 00:12:40.657 }, 00:12:40.657 { 00:12:40.657 "name": "BaseBdev2", 00:12:40.657 "uuid": "8921bcea-7fef-5e5d-a0d8-1d98138f4ec7", 00:12:40.657 "is_configured": true, 00:12:40.657 "data_offset": 0, 00:12:40.657 "data_size": 65536 00:12:40.657 }, 00:12:40.657 { 00:12:40.657 "name": "BaseBdev3", 00:12:40.657 "uuid": "ba8a5fae-a893-5ebf-a16e-53c00bee200f", 00:12:40.657 "is_configured": true, 00:12:40.657 "data_offset": 0, 00:12:40.657 "data_size": 65536 00:12:40.657 } 00:12:40.657 ] 00:12:40.657 }' 
00:12:40.657 09:47:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.916 09:47:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:40.916 09:47:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.916 09:47:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.916 09:47:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:41.847 09:47:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:41.847 09:47:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:41.847 09:47:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.847 09:47:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:41.847 09:47:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:41.847 09:47:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.847 09:47:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.848 09:47:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.848 09:47:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.848 09:47:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.848 09:47:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.848 09:47:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.848 "name": "raid_bdev1", 00:12:41.848 "uuid": "d446b603-0ed6-420c-947e-ca734d183c79", 00:12:41.848 "strip_size_kb": 64, 00:12:41.848 
"state": "online", 00:12:41.848 "raid_level": "raid5f", 00:12:41.848 "superblock": false, 00:12:41.848 "num_base_bdevs": 3, 00:12:41.848 "num_base_bdevs_discovered": 3, 00:12:41.848 "num_base_bdevs_operational": 3, 00:12:41.848 "process": { 00:12:41.848 "type": "rebuild", 00:12:41.848 "target": "spare", 00:12:41.848 "progress": { 00:12:41.848 "blocks": 110592, 00:12:41.848 "percent": 84 00:12:41.848 } 00:12:41.848 }, 00:12:41.848 "base_bdevs_list": [ 00:12:41.848 { 00:12:41.848 "name": "spare", 00:12:41.848 "uuid": "fde72d69-325a-525b-a0b3-041434e033a3", 00:12:41.848 "is_configured": true, 00:12:41.848 "data_offset": 0, 00:12:41.848 "data_size": 65536 00:12:41.848 }, 00:12:41.848 { 00:12:41.848 "name": "BaseBdev2", 00:12:41.848 "uuid": "8921bcea-7fef-5e5d-a0d8-1d98138f4ec7", 00:12:41.848 "is_configured": true, 00:12:41.848 "data_offset": 0, 00:12:41.848 "data_size": 65536 00:12:41.848 }, 00:12:41.848 { 00:12:41.848 "name": "BaseBdev3", 00:12:41.848 "uuid": "ba8a5fae-a893-5ebf-a16e-53c00bee200f", 00:12:41.848 "is_configured": true, 00:12:41.848 "data_offset": 0, 00:12:41.848 "data_size": 65536 00:12:41.848 } 00:12:41.848 ] 00:12:41.848 }' 00:12:41.848 09:47:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.848 09:47:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:41.848 09:47:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.848 09:47:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:41.848 09:47:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:42.780 [2024-10-30 09:47:21.256574] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:42.780 [2024-10-30 09:47:21.256783] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:42.780 [2024-10-30 
09:47:21.256825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.039 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:43.039 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.040 "name": "raid_bdev1", 00:12:43.040 "uuid": "d446b603-0ed6-420c-947e-ca734d183c79", 00:12:43.040 "strip_size_kb": 64, 00:12:43.040 "state": "online", 00:12:43.040 "raid_level": "raid5f", 00:12:43.040 "superblock": false, 00:12:43.040 "num_base_bdevs": 3, 00:12:43.040 "num_base_bdevs_discovered": 3, 00:12:43.040 "num_base_bdevs_operational": 3, 00:12:43.040 "base_bdevs_list": [ 00:12:43.040 { 00:12:43.040 "name": "spare", 00:12:43.040 "uuid": "fde72d69-325a-525b-a0b3-041434e033a3", 00:12:43.040 "is_configured": true, 00:12:43.040 "data_offset": 0, 00:12:43.040 "data_size": 65536 
00:12:43.040 }, 00:12:43.040 { 00:12:43.040 "name": "BaseBdev2", 00:12:43.040 "uuid": "8921bcea-7fef-5e5d-a0d8-1d98138f4ec7", 00:12:43.040 "is_configured": true, 00:12:43.040 "data_offset": 0, 00:12:43.040 "data_size": 65536 00:12:43.040 }, 00:12:43.040 { 00:12:43.040 "name": "BaseBdev3", 00:12:43.040 "uuid": "ba8a5fae-a893-5ebf-a16e-53c00bee200f", 00:12:43.040 "is_configured": true, 00:12:43.040 "data_offset": 0, 00:12:43.040 "data_size": 65536 00:12:43.040 } 00:12:43.040 ] 00:12:43.040 }' 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.040 "name": "raid_bdev1", 00:12:43.040 "uuid": "d446b603-0ed6-420c-947e-ca734d183c79", 00:12:43.040 "strip_size_kb": 64, 00:12:43.040 "state": "online", 00:12:43.040 "raid_level": "raid5f", 00:12:43.040 "superblock": false, 00:12:43.040 "num_base_bdevs": 3, 00:12:43.040 "num_base_bdevs_discovered": 3, 00:12:43.040 "num_base_bdevs_operational": 3, 00:12:43.040 "base_bdevs_list": [ 00:12:43.040 { 00:12:43.040 "name": "spare", 00:12:43.040 "uuid": "fde72d69-325a-525b-a0b3-041434e033a3", 00:12:43.040 "is_configured": true, 00:12:43.040 "data_offset": 0, 00:12:43.040 "data_size": 65536 00:12:43.040 }, 00:12:43.040 { 00:12:43.040 "name": "BaseBdev2", 00:12:43.040 "uuid": "8921bcea-7fef-5e5d-a0d8-1d98138f4ec7", 00:12:43.040 "is_configured": true, 00:12:43.040 "data_offset": 0, 00:12:43.040 "data_size": 65536 00:12:43.040 }, 00:12:43.040 { 00:12:43.040 "name": "BaseBdev3", 00:12:43.040 "uuid": "ba8a5fae-a893-5ebf-a16e-53c00bee200f", 00:12:43.040 "is_configured": true, 00:12:43.040 "data_offset": 0, 00:12:43.040 "data_size": 65536 00:12:43.040 } 00:12:43.040 ] 00:12:43.040 }' 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.040 09:47:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.299 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.299 "name": "raid_bdev1", 00:12:43.299 "uuid": "d446b603-0ed6-420c-947e-ca734d183c79", 00:12:43.299 "strip_size_kb": 64, 00:12:43.299 "state": "online", 00:12:43.299 "raid_level": "raid5f", 00:12:43.299 "superblock": false, 00:12:43.299 "num_base_bdevs": 3, 00:12:43.299 "num_base_bdevs_discovered": 3, 00:12:43.299 "num_base_bdevs_operational": 3, 00:12:43.299 "base_bdevs_list": [ 00:12:43.299 { 00:12:43.299 "name": "spare", 00:12:43.299 "uuid": 
"fde72d69-325a-525b-a0b3-041434e033a3", 00:12:43.299 "is_configured": true, 00:12:43.299 "data_offset": 0, 00:12:43.299 "data_size": 65536 00:12:43.299 }, 00:12:43.299 { 00:12:43.299 "name": "BaseBdev2", 00:12:43.299 "uuid": "8921bcea-7fef-5e5d-a0d8-1d98138f4ec7", 00:12:43.299 "is_configured": true, 00:12:43.299 "data_offset": 0, 00:12:43.299 "data_size": 65536 00:12:43.299 }, 00:12:43.299 { 00:12:43.299 "name": "BaseBdev3", 00:12:43.299 "uuid": "ba8a5fae-a893-5ebf-a16e-53c00bee200f", 00:12:43.299 "is_configured": true, 00:12:43.299 "data_offset": 0, 00:12:43.299 "data_size": 65536 00:12:43.299 } 00:12:43.299 ] 00:12:43.299 }' 00:12:43.299 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.299 09:47:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.557 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:43.557 09:47:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.557 09:47:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.557 [2024-10-30 09:47:21.938370] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:43.557 [2024-10-30 09:47:21.938394] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:43.557 [2024-10-30 09:47:21.938456] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:43.557 [2024-10-30 09:47:21.938526] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:43.557 [2024-10-30 09:47:21.938539] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:43.557 09:47:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.557 09:47:21 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.557 09:47:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.557 09:47:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.557 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:43.557 09:47:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.557 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:43.557 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:43.557 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:43.557 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:43.557 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:43.557 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:43.557 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:43.557 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:43.557 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:43.557 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:43.557 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:43.557 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:43.557 09:47:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:43.815 /dev/nbd0 00:12:43.815 09:47:22 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:43.815 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:43.815 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:43.815 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:12:43.815 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:43.815 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:43.815 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:43.815 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:12:43.815 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:43.815 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:43.815 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.815 1+0 records in 00:12:43.815 1+0 records out 00:12:43.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000133232 s, 30.7 MB/s 00:12:43.815 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.815 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:12:43.815 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.815 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:43.815 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:12:43.815 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:12:43.815 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:43.815 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:43.815 /dev/nbd1 00:12:44.073 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:44.073 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:44.073 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:44.073 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:12:44.073 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:44.073 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:44.074 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:44.074 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:12:44.074 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:44.074 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:44.074 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.074 1+0 records in 00:12:44.074 1+0 records out 00:12:44.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282613 s, 14.5 MB/s 00:12:44.074 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.074 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:12:44.074 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.074 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:44.074 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:12:44.074 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:44.074 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:44.074 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:44.074 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:44.074 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:44.074 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:44.074 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:44.074 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:44.074 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.074 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:44.332 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:44.332 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:44.332 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:44.332 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.332 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.332 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:12:44.332 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:44.332 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.332 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.332 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:44.591 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:44.591 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:44.591 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:44.591 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.591 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.591 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:44.591 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:44.591 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.591 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:44.591 09:47:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 79341 00:12:44.591 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 79341 ']' 00:12:44.591 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 79341 00:12:44.591 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:12:44.591 09:47:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:44.591 09:47:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o 
comm= 79341 00:12:44.591 09:47:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:44.591 09:47:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:44.591 killing process with pid 79341 00:12:44.591 Received shutdown signal, test time was about 60.000000 seconds 00:12:44.591 00:12:44.591 Latency(us) 00:12:44.591 [2024-10-30T09:47:23.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:44.591 [2024-10-30T09:47:23.211Z] =================================================================================================================== 00:12:44.591 [2024-10-30T09:47:23.211Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:44.591 09:47:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79341' 00:12:44.591 09:47:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 79341 00:12:44.591 09:47:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 79341 00:12:44.591 [2024-10-30 09:47:23.018310] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:44.591 [2024-10-30 09:47:23.211061] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:45.157 09:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:45.157 00:12:45.157 real 0m13.216s 00:12:45.157 user 0m16.059s 00:12:45.157 sys 0m1.514s 00:12:45.157 09:47:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:45.157 09:47:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.157 ************************************ 00:12:45.157 END TEST raid5f_rebuild_test 00:12:45.157 ************************************ 00:12:45.416 09:47:23 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:12:45.416 09:47:23 
bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:12:45.416 09:47:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:45.416 09:47:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:45.416 ************************************ 00:12:45.416 START TEST raid5f_rebuild_test_sb 00:12:45.416 ************************************ 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 true false true 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:45.416 09:47:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=79765 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 79765 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 
00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 79765 ']' 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:45.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:45.416 09:47:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.416 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:45.416 Zero copy mechanism will not be used. 00:12:45.416 [2024-10-30 09:47:23.872623] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
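The `create_arg` assembly traced just above (`strip_size=64` and `create_arg+=' -z 64'` for non-raid1 levels, plus `create_arg+=' -s'` when a superblock is requested) can be mirrored in a short Python sketch. The function name and signature here are illustrative only, not part of the SPDK scripts:

```python
def build_create_arg(raid_level: str, superblock: bool, strip_size_kb: int = 64) -> str:
    """Sketch of the bdev_raid.sh logic: raid1 takes no strip size;
    every other level appends '-z <strip_size>'; '-s' requests a superblock."""
    arg = ""
    if raid_level != "raid1":
        arg += f" -z {strip_size_kb}"
    if superblock:
        arg += " -s"
    return arg

print(build_create_arg("raid5f", True))  # prints " -z 64 -s"
```

For the raid5f superblock test in this log, the resulting argument string matches the `-z 64 -s` flags passed to `bdev_raid_create`.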
00:12:45.416 [2024-10-30 09:47:23.872737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79765 ] 00:12:45.416 [2024-10-30 09:47:24.025897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.675 [2024-10-30 09:47:24.106075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.675 [2024-10-30 09:47:24.213373] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.675 [2024-10-30 09:47:24.213417] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.242 BaseBdev1_malloc 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.242 [2024-10-30 09:47:24.705753] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:46.242 [2024-10-30 09:47:24.705810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.242 [2024-10-30 09:47:24.705827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:46.242 [2024-10-30 09:47:24.705836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.242 [2024-10-30 09:47:24.707547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.242 [2024-10-30 09:47:24.707581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:46.242 BaseBdev1 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.242 BaseBdev2_malloc 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.242 [2024-10-30 09:47:24.736928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:46.242 [2024-10-30 09:47:24.736977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:46.242 [2024-10-30 09:47:24.736991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:46.242 [2024-10-30 09:47:24.736999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.242 [2024-10-30 09:47:24.738690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.242 [2024-10-30 09:47:24.738722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:46.242 BaseBdev2 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.242 BaseBdev3_malloc 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.242 [2024-10-30 09:47:24.784890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:46.242 [2024-10-30 09:47:24.784942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.242 [2024-10-30 09:47:24.784959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:46.242 [2024-10-30 
09:47:24.784968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.242 [2024-10-30 09:47:24.786667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.242 [2024-10-30 09:47:24.786700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:46.242 BaseBdev3 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.242 spare_malloc 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.242 spare_delay 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.242 [2024-10-30 09:47:24.828254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:46.242 [2024-10-30 09:47:24.828295] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.242 [2024-10-30 09:47:24.828310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:46.242 [2024-10-30 09:47:24.828319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.242 [2024-10-30 09:47:24.830036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.242 [2024-10-30 09:47:24.830077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:46.242 spare 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.242 [2024-10-30 09:47:24.836309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:46.242 [2024-10-30 09:47:24.837778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:46.242 [2024-10-30 09:47:24.837836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:46.242 [2024-10-30 09:47:24.837967] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:46.242 [2024-10-30 09:47:24.837983] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:46.242 [2024-10-30 09:47:24.838191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:46.242 [2024-10-30 09:47:24.841188] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:46.242 [2024-10-30 09:47:24.841208] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:46.242 [2024-10-30 09:47:24.841346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.242 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.243 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.243 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.243 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.243 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.500 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.500 "name": "raid_bdev1", 00:12:46.500 "uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 00:12:46.500 "strip_size_kb": 64, 00:12:46.500 "state": "online", 00:12:46.500 "raid_level": "raid5f", 00:12:46.500 "superblock": true, 00:12:46.500 "num_base_bdevs": 3, 00:12:46.500 "num_base_bdevs_discovered": 3, 00:12:46.500 "num_base_bdevs_operational": 3, 00:12:46.500 "base_bdevs_list": [ 00:12:46.500 { 00:12:46.500 "name": "BaseBdev1", 00:12:46.500 "uuid": "14b71756-2d87-5705-a611-68e36cc3d339", 00:12:46.500 "is_configured": true, 00:12:46.500 "data_offset": 2048, 00:12:46.500 "data_size": 63488 00:12:46.500 }, 00:12:46.500 { 00:12:46.500 "name": "BaseBdev2", 00:12:46.500 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:12:46.500 "is_configured": true, 00:12:46.500 "data_offset": 2048, 00:12:46.500 "data_size": 63488 00:12:46.500 }, 00:12:46.500 { 00:12:46.500 "name": "BaseBdev3", 00:12:46.500 "uuid": "58505254-4376-52b3-a96f-fb05163206e0", 00:12:46.500 "is_configured": true, 00:12:46.500 "data_offset": 2048, 00:12:46.500 "data_size": 63488 00:12:46.500 } 00:12:46.500 ] 00:12:46.500 }' 00:12:46.500 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.500 09:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.758 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:46.758 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.758 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:46.758 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.758 [2024-10-30 09:47:25.133558] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:12:46.759 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.759 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:12:46.759 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:46.759 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.759 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.759 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.759 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.759 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:46.759 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:46.759 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:46.759 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:46.759 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:46.759 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:46.759 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:46.759 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:46.759 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:46.759 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:46.759 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:12:46.759 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:46.759 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:46.759 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:47.016 [2024-10-30 09:47:25.381480] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:47.016 /dev/nbd0 00:12:47.016 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:47.016 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:47.016 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:47.016 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:12:47.016 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:47.016 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:47.016 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:47.016 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:12:47.016 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:47.016 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:47.016 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:47.016 1+0 records in 00:12:47.016 1+0 records out 00:12:47.016 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192195 s, 21.3 MB/s 00:12:47.016 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.016 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:12:47.016 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.016 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:47.016 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:12:47.016 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:47.016 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:47.016 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:12:47.016 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:12:47.016 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:12:47.016 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:12:47.273 496+0 records in 00:12:47.273 496+0 records out 00:12:47.273 65011712 bytes (65 MB, 62 MiB) copied, 0.314349 s, 207 MB/s 00:12:47.273 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:47.273 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:47.273 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:47.273 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:47.273 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:47.273 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:12:47.274 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:47.532 [2024-10-30 09:47:25.937506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.532 [2024-10-30 09:47:25.968812] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.532 09:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.532 09:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.532 "name": "raid_bdev1", 00:12:47.532 "uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 00:12:47.532 "strip_size_kb": 64, 00:12:47.532 "state": "online", 00:12:47.532 "raid_level": "raid5f", 00:12:47.532 "superblock": true, 00:12:47.532 "num_base_bdevs": 3, 00:12:47.532 "num_base_bdevs_discovered": 2, 00:12:47.532 "num_base_bdevs_operational": 2, 00:12:47.532 "base_bdevs_list": [ 00:12:47.532 { 00:12:47.532 "name": null, 00:12:47.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.532 "is_configured": 
false, 00:12:47.532 "data_offset": 0, 00:12:47.532 "data_size": 63488 00:12:47.532 }, 00:12:47.532 { 00:12:47.532 "name": "BaseBdev2", 00:12:47.532 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:12:47.532 "is_configured": true, 00:12:47.532 "data_offset": 2048, 00:12:47.532 "data_size": 63488 00:12:47.532 }, 00:12:47.532 { 00:12:47.532 "name": "BaseBdev3", 00:12:47.532 "uuid": "58505254-4376-52b3-a96f-fb05163206e0", 00:12:47.532 "is_configured": true, 00:12:47.532 "data_offset": 2048, 00:12:47.532 "data_size": 63488 00:12:47.532 } 00:12:47.532 ] 00:12:47.532 }' 00:12:47.532 09:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.532 09:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.791 09:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:47.791 09:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.791 09:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.791 [2024-10-30 09:47:26.276870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:47.791 [2024-10-30 09:47:26.285569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:12:47.791 09:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.791 09:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:47.791 [2024-10-30 09:47:26.290037] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:48.722 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.722 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.722 09:47:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.722 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.722 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.722 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.722 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.722 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.722 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.722 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.722 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.722 "name": "raid_bdev1", 00:12:48.722 "uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 00:12:48.722 "strip_size_kb": 64, 00:12:48.722 "state": "online", 00:12:48.722 "raid_level": "raid5f", 00:12:48.722 "superblock": true, 00:12:48.722 "num_base_bdevs": 3, 00:12:48.722 "num_base_bdevs_discovered": 3, 00:12:48.722 "num_base_bdevs_operational": 3, 00:12:48.722 "process": { 00:12:48.722 "type": "rebuild", 00:12:48.722 "target": "spare", 00:12:48.722 "progress": { 00:12:48.722 "blocks": 20480, 00:12:48.722 "percent": 16 00:12:48.722 } 00:12:48.722 }, 00:12:48.723 "base_bdevs_list": [ 00:12:48.723 { 00:12:48.723 "name": "spare", 00:12:48.723 "uuid": "9ad7cd16-8661-533f-bd96-2f1f1294d486", 00:12:48.723 "is_configured": true, 00:12:48.723 "data_offset": 2048, 00:12:48.723 "data_size": 63488 00:12:48.723 }, 00:12:48.723 { 00:12:48.723 "name": "BaseBdev2", 00:12:48.723 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:12:48.723 "is_configured": true, 00:12:48.723 "data_offset": 2048, 00:12:48.723 "data_size": 63488 
00:12:48.723 }, 00:12:48.723 { 00:12:48.723 "name": "BaseBdev3", 00:12:48.723 "uuid": "58505254-4376-52b3-a96f-fb05163206e0", 00:12:48.723 "is_configured": true, 00:12:48.723 "data_offset": 2048, 00:12:48.723 "data_size": 63488 00:12:48.723 } 00:12:48.723 ] 00:12:48.723 }' 00:12:48.723 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.980 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.980 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.980 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.980 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:48.980 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.980 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.980 [2024-10-30 09:47:27.394682] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:48.980 [2024-10-30 09:47:27.398189] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:48.980 [2024-10-30 09:47:27.398236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.980 [2024-10-30 09:47:27.398250] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:48.980 [2024-10-30 09:47:27.398257] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:48.980 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.980 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:48.981 09:47:27 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.981 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.981 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:48.981 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.981 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:48.981 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.981 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.981 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.981 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.981 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.981 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.981 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.981 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.981 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.981 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.981 "name": "raid_bdev1", 00:12:48.981 "uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 00:12:48.981 "strip_size_kb": 64, 00:12:48.981 "state": "online", 00:12:48.981 "raid_level": "raid5f", 00:12:48.981 "superblock": true, 00:12:48.981 "num_base_bdevs": 3, 00:12:48.981 "num_base_bdevs_discovered": 2, 00:12:48.981 "num_base_bdevs_operational": 2, 00:12:48.981 "base_bdevs_list": [ 00:12:48.981 
{ 00:12:48.981 "name": null, 00:12:48.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.981 "is_configured": false, 00:12:48.981 "data_offset": 0, 00:12:48.981 "data_size": 63488 00:12:48.981 }, 00:12:48.981 { 00:12:48.981 "name": "BaseBdev2", 00:12:48.981 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:12:48.981 "is_configured": true, 00:12:48.981 "data_offset": 2048, 00:12:48.981 "data_size": 63488 00:12:48.981 }, 00:12:48.981 { 00:12:48.981 "name": "BaseBdev3", 00:12:48.981 "uuid": "58505254-4376-52b3-a96f-fb05163206e0", 00:12:48.981 "is_configured": true, 00:12:48.981 "data_offset": 2048, 00:12:48.981 "data_size": 63488 00:12:48.981 } 00:12:48.981 ] 00:12:48.981 }' 00:12:48.981 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.981 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.240 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:49.240 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.240 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:49.240 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:49.240 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.240 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.240 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.240 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.240 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.240 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:49.240 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.240 "name": "raid_bdev1", 00:12:49.240 "uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 00:12:49.240 "strip_size_kb": 64, 00:12:49.240 "state": "online", 00:12:49.240 "raid_level": "raid5f", 00:12:49.240 "superblock": true, 00:12:49.240 "num_base_bdevs": 3, 00:12:49.240 "num_base_bdevs_discovered": 2, 00:12:49.240 "num_base_bdevs_operational": 2, 00:12:49.240 "base_bdevs_list": [ 00:12:49.240 { 00:12:49.240 "name": null, 00:12:49.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.240 "is_configured": false, 00:12:49.240 "data_offset": 0, 00:12:49.240 "data_size": 63488 00:12:49.240 }, 00:12:49.240 { 00:12:49.240 "name": "BaseBdev2", 00:12:49.240 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:12:49.240 "is_configured": true, 00:12:49.240 "data_offset": 2048, 00:12:49.240 "data_size": 63488 00:12:49.240 }, 00:12:49.240 { 00:12:49.240 "name": "BaseBdev3", 00:12:49.240 "uuid": "58505254-4376-52b3-a96f-fb05163206e0", 00:12:49.240 "is_configured": true, 00:12:49.240 "data_offset": 2048, 00:12:49.240 "data_size": 63488 00:12:49.240 } 00:12:49.240 ] 00:12:49.240 }' 00:12:49.240 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.240 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:49.240 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.240 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:49.240 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:49.240 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.240 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:12:49.240 [2024-10-30 09:47:27.839978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:49.240 [2024-10-30 09:47:27.848210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:12:49.240 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.240 09:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:49.240 [2024-10-30 09:47:27.852459] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.612 "name": "raid_bdev1", 00:12:50.612 "uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 00:12:50.612 "strip_size_kb": 64, 00:12:50.612 "state": "online", 
00:12:50.612 "raid_level": "raid5f", 00:12:50.612 "superblock": true, 00:12:50.612 "num_base_bdevs": 3, 00:12:50.612 "num_base_bdevs_discovered": 3, 00:12:50.612 "num_base_bdevs_operational": 3, 00:12:50.612 "process": { 00:12:50.612 "type": "rebuild", 00:12:50.612 "target": "spare", 00:12:50.612 "progress": { 00:12:50.612 "blocks": 20480, 00:12:50.612 "percent": 16 00:12:50.612 } 00:12:50.612 }, 00:12:50.612 "base_bdevs_list": [ 00:12:50.612 { 00:12:50.612 "name": "spare", 00:12:50.612 "uuid": "9ad7cd16-8661-533f-bd96-2f1f1294d486", 00:12:50.612 "is_configured": true, 00:12:50.612 "data_offset": 2048, 00:12:50.612 "data_size": 63488 00:12:50.612 }, 00:12:50.612 { 00:12:50.612 "name": "BaseBdev2", 00:12:50.612 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:12:50.612 "is_configured": true, 00:12:50.612 "data_offset": 2048, 00:12:50.612 "data_size": 63488 00:12:50.612 }, 00:12:50.612 { 00:12:50.612 "name": "BaseBdev3", 00:12:50.612 "uuid": "58505254-4376-52b3-a96f-fb05163206e0", 00:12:50.612 "is_configured": true, 00:12:50.612 "data_offset": 2048, 00:12:50.612 "data_size": 63488 00:12:50.612 } 00:12:50.612 ] 00:12:50.612 }' 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:50.612 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 
-- # local num_base_bdevs_operational=3 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=443 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.612 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.612 "name": "raid_bdev1", 00:12:50.612 "uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 00:12:50.612 "strip_size_kb": 64, 00:12:50.612 "state": "online", 00:12:50.612 "raid_level": "raid5f", 00:12:50.612 "superblock": true, 00:12:50.612 "num_base_bdevs": 3, 00:12:50.612 "num_base_bdevs_discovered": 3, 00:12:50.612 "num_base_bdevs_operational": 3, 00:12:50.612 "process": { 00:12:50.612 "type": 
"rebuild", 00:12:50.612 "target": "spare", 00:12:50.612 "progress": { 00:12:50.613 "blocks": 22528, 00:12:50.613 "percent": 17 00:12:50.613 } 00:12:50.613 }, 00:12:50.613 "base_bdevs_list": [ 00:12:50.613 { 00:12:50.613 "name": "spare", 00:12:50.613 "uuid": "9ad7cd16-8661-533f-bd96-2f1f1294d486", 00:12:50.613 "is_configured": true, 00:12:50.613 "data_offset": 2048, 00:12:50.613 "data_size": 63488 00:12:50.613 }, 00:12:50.613 { 00:12:50.613 "name": "BaseBdev2", 00:12:50.613 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:12:50.613 "is_configured": true, 00:12:50.613 "data_offset": 2048, 00:12:50.613 "data_size": 63488 00:12:50.613 }, 00:12:50.613 { 00:12:50.613 "name": "BaseBdev3", 00:12:50.613 "uuid": "58505254-4376-52b3-a96f-fb05163206e0", 00:12:50.613 "is_configured": true, 00:12:50.613 "data_offset": 2048, 00:12:50.613 "data_size": 63488 00:12:50.613 } 00:12:50.613 ] 00:12:50.613 }' 00:12:50.613 09:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.613 09:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:50.613 09:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.613 09:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:50.613 09:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:51.561 09:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:51.561 09:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:51.561 09:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.561 09:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:51.561 09:47:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:12:51.561 09:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.561 09:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.561 09:47:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.561 09:47:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.561 09:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.561 09:47:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.561 09:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.561 "name": "raid_bdev1", 00:12:51.561 "uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 00:12:51.561 "strip_size_kb": 64, 00:12:51.561 "state": "online", 00:12:51.561 "raid_level": "raid5f", 00:12:51.561 "superblock": true, 00:12:51.561 "num_base_bdevs": 3, 00:12:51.561 "num_base_bdevs_discovered": 3, 00:12:51.561 "num_base_bdevs_operational": 3, 00:12:51.561 "process": { 00:12:51.561 "type": "rebuild", 00:12:51.561 "target": "spare", 00:12:51.561 "progress": { 00:12:51.561 "blocks": 43008, 00:12:51.561 "percent": 33 00:12:51.561 } 00:12:51.561 }, 00:12:51.561 "base_bdevs_list": [ 00:12:51.561 { 00:12:51.561 "name": "spare", 00:12:51.561 "uuid": "9ad7cd16-8661-533f-bd96-2f1f1294d486", 00:12:51.561 "is_configured": true, 00:12:51.561 "data_offset": 2048, 00:12:51.561 "data_size": 63488 00:12:51.561 }, 00:12:51.561 { 00:12:51.561 "name": "BaseBdev2", 00:12:51.561 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:12:51.561 "is_configured": true, 00:12:51.561 "data_offset": 2048, 00:12:51.561 "data_size": 63488 00:12:51.561 }, 00:12:51.561 { 00:12:51.561 "name": "BaseBdev3", 00:12:51.561 "uuid": "58505254-4376-52b3-a96f-fb05163206e0", 00:12:51.561 
"is_configured": true, 00:12:51.561 "data_offset": 2048, 00:12:51.561 "data_size": 63488 00:12:51.561 } 00:12:51.561 ] 00:12:51.561 }' 00:12:51.561 09:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.561 09:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:51.561 09:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.561 09:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:51.561 09:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:52.935 09:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:52.935 09:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.935 09:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.935 09:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.935 09:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.935 09:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.935 09:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.935 09:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.935 09:47:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.935 09:47:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.935 09:47:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.935 09:47:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.935 "name": "raid_bdev1", 00:12:52.935 "uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 00:12:52.935 "strip_size_kb": 64, 00:12:52.935 "state": "online", 00:12:52.935 "raid_level": "raid5f", 00:12:52.935 "superblock": true, 00:12:52.935 "num_base_bdevs": 3, 00:12:52.935 "num_base_bdevs_discovered": 3, 00:12:52.935 "num_base_bdevs_operational": 3, 00:12:52.935 "process": { 00:12:52.935 "type": "rebuild", 00:12:52.935 "target": "spare", 00:12:52.935 "progress": { 00:12:52.935 "blocks": 65536, 00:12:52.935 "percent": 51 00:12:52.935 } 00:12:52.935 }, 00:12:52.935 "base_bdevs_list": [ 00:12:52.935 { 00:12:52.935 "name": "spare", 00:12:52.935 "uuid": "9ad7cd16-8661-533f-bd96-2f1f1294d486", 00:12:52.935 "is_configured": true, 00:12:52.935 "data_offset": 2048, 00:12:52.935 "data_size": 63488 00:12:52.935 }, 00:12:52.935 { 00:12:52.935 "name": "BaseBdev2", 00:12:52.935 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:12:52.935 "is_configured": true, 00:12:52.935 "data_offset": 2048, 00:12:52.935 "data_size": 63488 00:12:52.935 }, 00:12:52.935 { 00:12:52.935 "name": "BaseBdev3", 00:12:52.935 "uuid": "58505254-4376-52b3-a96f-fb05163206e0", 00:12:52.935 "is_configured": true, 00:12:52.935 "data_offset": 2048, 00:12:52.935 "data_size": 63488 00:12:52.935 } 00:12:52.935 ] 00:12:52.935 }' 00:12:52.935 09:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.935 09:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:52.935 09:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.935 09:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.936 09:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:53.870 09:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:12:53.870 09:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:53.870 09:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.870 09:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:53.870 09:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:53.870 09:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.870 09:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.870 09:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.870 09:47:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.870 09:47:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.870 09:47:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.870 09:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.870 "name": "raid_bdev1", 00:12:53.870 "uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 00:12:53.870 "strip_size_kb": 64, 00:12:53.870 "state": "online", 00:12:53.870 "raid_level": "raid5f", 00:12:53.870 "superblock": true, 00:12:53.870 "num_base_bdevs": 3, 00:12:53.870 "num_base_bdevs_discovered": 3, 00:12:53.870 "num_base_bdevs_operational": 3, 00:12:53.870 "process": { 00:12:53.870 "type": "rebuild", 00:12:53.870 "target": "spare", 00:12:53.870 "progress": { 00:12:53.870 "blocks": 88064, 00:12:53.870 "percent": 69 00:12:53.870 } 00:12:53.870 }, 00:12:53.870 "base_bdevs_list": [ 00:12:53.870 { 00:12:53.870 "name": "spare", 00:12:53.870 "uuid": "9ad7cd16-8661-533f-bd96-2f1f1294d486", 00:12:53.870 "is_configured": true, 
00:12:53.870 "data_offset": 2048, 00:12:53.870 "data_size": 63488 00:12:53.870 }, 00:12:53.870 { 00:12:53.870 "name": "BaseBdev2", 00:12:53.870 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:12:53.870 "is_configured": true, 00:12:53.870 "data_offset": 2048, 00:12:53.870 "data_size": 63488 00:12:53.870 }, 00:12:53.870 { 00:12:53.870 "name": "BaseBdev3", 00:12:53.870 "uuid": "58505254-4376-52b3-a96f-fb05163206e0", 00:12:53.870 "is_configured": true, 00:12:53.870 "data_offset": 2048, 00:12:53.870 "data_size": 63488 00:12:53.870 } 00:12:53.870 ] 00:12:53.870 }' 00:12:53.870 09:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.870 09:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:53.870 09:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.870 09:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:53.870 09:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:54.806 09:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:54.806 09:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.806 09:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.806 09:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.806 09:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.806 09:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.806 09:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.806 09:47:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.806 09:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.806 09:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.806 09:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.806 09:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.806 "name": "raid_bdev1", 00:12:54.806 "uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 00:12:54.806 "strip_size_kb": 64, 00:12:54.806 "state": "online", 00:12:54.806 "raid_level": "raid5f", 00:12:54.806 "superblock": true, 00:12:54.806 "num_base_bdevs": 3, 00:12:54.806 "num_base_bdevs_discovered": 3, 00:12:54.806 "num_base_bdevs_operational": 3, 00:12:54.806 "process": { 00:12:54.806 "type": "rebuild", 00:12:54.806 "target": "spare", 00:12:54.806 "progress": { 00:12:54.806 "blocks": 110592, 00:12:54.806 "percent": 87 00:12:54.806 } 00:12:54.806 }, 00:12:54.806 "base_bdevs_list": [ 00:12:54.806 { 00:12:54.806 "name": "spare", 00:12:54.806 "uuid": "9ad7cd16-8661-533f-bd96-2f1f1294d486", 00:12:54.806 "is_configured": true, 00:12:54.806 "data_offset": 2048, 00:12:54.806 "data_size": 63488 00:12:54.806 }, 00:12:54.806 { 00:12:54.806 "name": "BaseBdev2", 00:12:54.806 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:12:54.806 "is_configured": true, 00:12:54.806 "data_offset": 2048, 00:12:54.806 "data_size": 63488 00:12:54.806 }, 00:12:54.806 { 00:12:54.806 "name": "BaseBdev3", 00:12:54.806 "uuid": "58505254-4376-52b3-a96f-fb05163206e0", 00:12:54.806 "is_configured": true, 00:12:54.806 "data_offset": 2048, 00:12:54.806 "data_size": 63488 00:12:54.806 } 00:12:54.806 ] 00:12:54.806 }' 00:12:54.806 09:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.063 09:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:12:55.063 09:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.063 09:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:55.063 09:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:55.628 [2024-10-30 09:47:34.098977] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:55.628 [2024-10-30 09:47:34.099054] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:55.628 [2024-10-30 09:47:34.099163] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.886 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:55.886 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.886 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.886 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:55.886 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.886 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.886 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.886 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.886 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.886 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.886 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.886 09:47:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.886 "name": "raid_bdev1", 00:12:55.886 "uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 00:12:55.886 "strip_size_kb": 64, 00:12:55.886 "state": "online", 00:12:55.886 "raid_level": "raid5f", 00:12:55.886 "superblock": true, 00:12:55.886 "num_base_bdevs": 3, 00:12:55.886 "num_base_bdevs_discovered": 3, 00:12:55.886 "num_base_bdevs_operational": 3, 00:12:55.886 "base_bdevs_list": [ 00:12:55.886 { 00:12:55.886 "name": "spare", 00:12:55.886 "uuid": "9ad7cd16-8661-533f-bd96-2f1f1294d486", 00:12:55.886 "is_configured": true, 00:12:55.886 "data_offset": 2048, 00:12:55.886 "data_size": 63488 00:12:55.886 }, 00:12:55.886 { 00:12:55.886 "name": "BaseBdev2", 00:12:55.886 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:12:55.886 "is_configured": true, 00:12:55.887 "data_offset": 2048, 00:12:55.887 "data_size": 63488 00:12:55.887 }, 00:12:55.887 { 00:12:55.887 "name": "BaseBdev3", 00:12:55.887 "uuid": "58505254-4376-52b3-a96f-fb05163206e0", 00:12:55.887 "is_configured": true, 00:12:55.887 "data_offset": 2048, 00:12:55.887 "data_size": 63488 00:12:55.887 } 00:12:55.887 ] 00:12:55.887 }' 00:12:55.887 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.145 
09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.145 "name": "raid_bdev1", 00:12:56.145 "uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 00:12:56.145 "strip_size_kb": 64, 00:12:56.145 "state": "online", 00:12:56.145 "raid_level": "raid5f", 00:12:56.145 "superblock": true, 00:12:56.145 "num_base_bdevs": 3, 00:12:56.145 "num_base_bdevs_discovered": 3, 00:12:56.145 "num_base_bdevs_operational": 3, 00:12:56.145 "base_bdevs_list": [ 00:12:56.145 { 00:12:56.145 "name": "spare", 00:12:56.145 "uuid": "9ad7cd16-8661-533f-bd96-2f1f1294d486", 00:12:56.145 "is_configured": true, 00:12:56.145 "data_offset": 2048, 00:12:56.145 "data_size": 63488 00:12:56.145 }, 00:12:56.145 { 00:12:56.145 "name": "BaseBdev2", 00:12:56.145 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:12:56.145 "is_configured": true, 00:12:56.145 "data_offset": 2048, 00:12:56.145 "data_size": 63488 00:12:56.145 }, 00:12:56.145 { 00:12:56.145 "name": "BaseBdev3", 00:12:56.145 "uuid": "58505254-4376-52b3-a96f-fb05163206e0", 00:12:56.145 "is_configured": true, 00:12:56.145 "data_offset": 2048, 
00:12:56.145 "data_size": 63488 00:12:56.145 } 00:12:56.145 ] 00:12:56.145 }' 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.145 "name": "raid_bdev1", 00:12:56.145 "uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 00:12:56.145 "strip_size_kb": 64, 00:12:56.145 "state": "online", 00:12:56.145 "raid_level": "raid5f", 00:12:56.145 "superblock": true, 00:12:56.145 "num_base_bdevs": 3, 00:12:56.145 "num_base_bdevs_discovered": 3, 00:12:56.145 "num_base_bdevs_operational": 3, 00:12:56.145 "base_bdevs_list": [ 00:12:56.145 { 00:12:56.145 "name": "spare", 00:12:56.145 "uuid": "9ad7cd16-8661-533f-bd96-2f1f1294d486", 00:12:56.145 "is_configured": true, 00:12:56.145 "data_offset": 2048, 00:12:56.145 "data_size": 63488 00:12:56.145 }, 00:12:56.145 { 00:12:56.145 "name": "BaseBdev2", 00:12:56.145 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:12:56.145 "is_configured": true, 00:12:56.145 "data_offset": 2048, 00:12:56.145 "data_size": 63488 00:12:56.145 }, 00:12:56.145 { 00:12:56.145 "name": "BaseBdev3", 00:12:56.145 "uuid": "58505254-4376-52b3-a96f-fb05163206e0", 00:12:56.145 "is_configured": true, 00:12:56.145 "data_offset": 2048, 00:12:56.145 "data_size": 63488 00:12:56.145 } 00:12:56.145 ] 00:12:56.145 }' 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.145 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.403 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:56.403 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.403 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.403 [2024-10-30 09:47:34.976983] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:56.403 [2024-10-30 09:47:34.977010] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:56.403 [2024-10-30 09:47:34.977085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:56.403 [2024-10-30 09:47:34.977155] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:56.403 [2024-10-30 09:47:34.977172] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:56.403 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.403 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:56.403 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.403 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.403 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.403 09:47:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.403 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:56.403 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:56.403 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:56.403 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:56.403 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:56.403 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:56.403 09:47:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:56.403 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:56.403 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:56.403 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:56.403 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:56.403 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:56.403 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:56.660 /dev/nbd0 00:12:56.660 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:56.660 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:56.660 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:56.660 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:12:56.660 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:56.660 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:56.660 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:56.660 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:12:56.660 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:56.660 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:56.660 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.660 1+0 records in 00:12:56.660 1+0 records out 00:12:56.660 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170504 s, 24.0 MB/s 00:12:56.660 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.660 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:12:56.660 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.660 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:56.660 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:12:56.660 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:56.660 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:56.660 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:56.918 /dev/nbd1 00:12:56.918 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:56.918 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:56.918 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:56.918 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:12:56.918 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:56.918 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:56.918 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:56.918 
09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:12:56.918 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:56.918 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:56.918 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.918 1+0 records in 00:12:56.918 1+0 records out 00:12:56.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341113 s, 12.0 MB/s 00:12:56.918 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.918 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:12:56.918 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.918 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:56.918 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:12:56.918 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:56.918 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:56.918 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:57.176 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:57.176 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:57.176 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:57.176 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:12:57.176 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:57.176 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.176 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:57.434 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:57.434 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:57.434 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:57.434 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.434 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.434 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:57.434 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:57.434 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.434 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.434 09:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:57.434 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:57.434 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:57.434 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:57.434 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.434 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.434 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:57.434 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:57.434 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.434 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:57.434 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:57.434 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.434 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.434 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.434 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:57.434 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.434 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.434 [2024-10-30 09:47:36.052572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:57.434 [2024-10-30 09:47:36.052619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.434 [2024-10-30 09:47:36.052635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:57.434 [2024-10-30 09:47:36.052644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.692 [2024-10-30 09:47:36.054539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.692 [2024-10-30 09:47:36.054570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:57.693 [2024-10-30 09:47:36.054641] 
bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:57.693 [2024-10-30 09:47:36.054682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:57.693 [2024-10-30 09:47:36.054789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:57.693 [2024-10-30 09:47:36.054865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:57.693 spare 00:12:57.693 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.693 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:57.693 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.693 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.693 [2024-10-30 09:47:36.154936] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:57.693 [2024-10-30 09:47:36.154963] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:57.693 [2024-10-30 09:47:36.155210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:12:57.693 [2024-10-30 09:47:36.158122] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:57.693 [2024-10-30 09:47:36.158141] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:57.693 [2024-10-30 09:47:36.158293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.693 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.693 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:12:57.693 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.693 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.693 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:57.693 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.693 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:57.693 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.693 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.693 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.693 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.693 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.693 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.693 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.693 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.693 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.693 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.693 "name": "raid_bdev1", 00:12:57.693 "uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 00:12:57.693 "strip_size_kb": 64, 00:12:57.693 "state": "online", 00:12:57.693 "raid_level": "raid5f", 00:12:57.693 "superblock": true, 00:12:57.693 "num_base_bdevs": 3, 00:12:57.693 "num_base_bdevs_discovered": 3, 00:12:57.693 "num_base_bdevs_operational": 3, 00:12:57.693 "base_bdevs_list": [ 00:12:57.693 { 
00:12:57.693 "name": "spare", 00:12:57.693 "uuid": "9ad7cd16-8661-533f-bd96-2f1f1294d486", 00:12:57.693 "is_configured": true, 00:12:57.693 "data_offset": 2048, 00:12:57.693 "data_size": 63488 00:12:57.693 }, 00:12:57.693 { 00:12:57.693 "name": "BaseBdev2", 00:12:57.693 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:12:57.693 "is_configured": true, 00:12:57.693 "data_offset": 2048, 00:12:57.693 "data_size": 63488 00:12:57.693 }, 00:12:57.693 { 00:12:57.693 "name": "BaseBdev3", 00:12:57.693 "uuid": "58505254-4376-52b3-a96f-fb05163206e0", 00:12:57.693 "is_configured": true, 00:12:57.693 "data_offset": 2048, 00:12:57.693 "data_size": 63488 00:12:57.693 } 00:12:57.693 ] 00:12:57.693 }' 00:12:57.693 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.693 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.951 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:57.951 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.952 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:57.952 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:57.952 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.952 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.952 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.952 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.952 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.952 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.952 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.952 "name": "raid_bdev1", 00:12:57.952 "uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 00:12:57.952 "strip_size_kb": 64, 00:12:57.952 "state": "online", 00:12:57.952 "raid_level": "raid5f", 00:12:57.952 "superblock": true, 00:12:57.952 "num_base_bdevs": 3, 00:12:57.952 "num_base_bdevs_discovered": 3, 00:12:57.952 "num_base_bdevs_operational": 3, 00:12:57.952 "base_bdevs_list": [ 00:12:57.952 { 00:12:57.952 "name": "spare", 00:12:57.952 "uuid": "9ad7cd16-8661-533f-bd96-2f1f1294d486", 00:12:57.952 "is_configured": true, 00:12:57.952 "data_offset": 2048, 00:12:57.952 "data_size": 63488 00:12:57.952 }, 00:12:57.952 { 00:12:57.952 "name": "BaseBdev2", 00:12:57.952 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:12:57.952 "is_configured": true, 00:12:57.952 "data_offset": 2048, 00:12:57.952 "data_size": 63488 00:12:57.952 }, 00:12:57.952 { 00:12:57.952 "name": "BaseBdev3", 00:12:57.952 "uuid": "58505254-4376-52b3-a96f-fb05163206e0", 00:12:57.952 "is_configured": true, 00:12:57.952 "data_offset": 2048, 00:12:57.952 "data_size": 63488 00:12:57.952 } 00:12:57.952 ] 00:12:57.952 }' 00:12:57.952 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.952 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:57.952 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.209 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:58.209 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:58.209 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.209 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.210 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.210 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.210 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:58.210 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:58.210 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.210 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.210 [2024-10-30 09:47:36.610345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:58.210 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.210 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:58.210 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.210 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.210 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:58.210 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.210 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:58.210 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.210 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.210 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.210 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.210 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.210 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.210 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.210 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.210 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.210 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.210 "name": "raid_bdev1", 00:12:58.210 "uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 00:12:58.210 "strip_size_kb": 64, 00:12:58.210 "state": "online", 00:12:58.210 "raid_level": "raid5f", 00:12:58.210 "superblock": true, 00:12:58.210 "num_base_bdevs": 3, 00:12:58.210 "num_base_bdevs_discovered": 2, 00:12:58.210 "num_base_bdevs_operational": 2, 00:12:58.210 "base_bdevs_list": [ 00:12:58.210 { 00:12:58.210 "name": null, 00:12:58.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.210 "is_configured": false, 00:12:58.210 "data_offset": 0, 00:12:58.210 "data_size": 63488 00:12:58.210 }, 00:12:58.210 { 00:12:58.210 "name": "BaseBdev2", 00:12:58.210 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:12:58.210 "is_configured": true, 00:12:58.210 "data_offset": 2048, 00:12:58.210 "data_size": 63488 00:12:58.210 }, 00:12:58.210 { 00:12:58.210 "name": "BaseBdev3", 00:12:58.210 "uuid": "58505254-4376-52b3-a96f-fb05163206e0", 00:12:58.210 "is_configured": true, 00:12:58.210 "data_offset": 2048, 00:12:58.210 "data_size": 63488 00:12:58.210 } 00:12:58.210 ] 00:12:58.210 }' 00:12:58.210 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.210 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:58.468 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:58.468 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.468 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.468 [2024-10-30 09:47:36.938426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:58.468 [2024-10-30 09:47:36.938571] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:58.468 [2024-10-30 09:47:36.938585] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:58.468 [2024-10-30 09:47:36.938613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:58.468 [2024-10-30 09:47:36.946478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:12:58.468 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.468 09:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:58.468 [2024-10-30 09:47:36.950952] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:59.401 09:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:59.401 09:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.401 09:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:59.401 09:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:59.401 09:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.401 09:47:37 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.401 09:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.401 09:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.401 09:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.401 09:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.401 09:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.401 "name": "raid_bdev1", 00:12:59.401 "uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 00:12:59.401 "strip_size_kb": 64, 00:12:59.401 "state": "online", 00:12:59.401 "raid_level": "raid5f", 00:12:59.401 "superblock": true, 00:12:59.401 "num_base_bdevs": 3, 00:12:59.401 "num_base_bdevs_discovered": 3, 00:12:59.401 "num_base_bdevs_operational": 3, 00:12:59.401 "process": { 00:12:59.401 "type": "rebuild", 00:12:59.401 "target": "spare", 00:12:59.401 "progress": { 00:12:59.401 "blocks": 20480, 00:12:59.402 "percent": 16 00:12:59.402 } 00:12:59.402 }, 00:12:59.402 "base_bdevs_list": [ 00:12:59.402 { 00:12:59.402 "name": "spare", 00:12:59.402 "uuid": "9ad7cd16-8661-533f-bd96-2f1f1294d486", 00:12:59.402 "is_configured": true, 00:12:59.402 "data_offset": 2048, 00:12:59.402 "data_size": 63488 00:12:59.402 }, 00:12:59.402 { 00:12:59.402 "name": "BaseBdev2", 00:12:59.402 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:12:59.402 "is_configured": true, 00:12:59.402 "data_offset": 2048, 00:12:59.402 "data_size": 63488 00:12:59.402 }, 00:12:59.402 { 00:12:59.402 "name": "BaseBdev3", 00:12:59.402 "uuid": "58505254-4376-52b3-a96f-fb05163206e0", 00:12:59.402 "is_configured": true, 00:12:59.402 "data_offset": 2048, 00:12:59.402 "data_size": 63488 00:12:59.402 } 00:12:59.402 ] 00:12:59.402 }' 00:12:59.402 09:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:12:59.402 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:59.402 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.659 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:59.659 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:59.659 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.659 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.659 [2024-10-30 09:47:38.051760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:59.659 [2024-10-30 09:47:38.059121] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:59.659 [2024-10-30 09:47:38.059166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.659 [2024-10-30 09:47:38.059178] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:59.659 [2024-10-30 09:47:38.059185] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:59.659 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.659 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:59.659 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.659 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.659 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:59.659 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:12:59.659 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:59.659 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.659 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.659 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.659 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.659 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.659 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.659 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.659 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.659 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.659 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.659 "name": "raid_bdev1", 00:12:59.659 "uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 00:12:59.659 "strip_size_kb": 64, 00:12:59.659 "state": "online", 00:12:59.659 "raid_level": "raid5f", 00:12:59.659 "superblock": true, 00:12:59.659 "num_base_bdevs": 3, 00:12:59.659 "num_base_bdevs_discovered": 2, 00:12:59.659 "num_base_bdevs_operational": 2, 00:12:59.659 "base_bdevs_list": [ 00:12:59.659 { 00:12:59.659 "name": null, 00:12:59.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.659 "is_configured": false, 00:12:59.659 "data_offset": 0, 00:12:59.659 "data_size": 63488 00:12:59.659 }, 00:12:59.659 { 00:12:59.659 "name": "BaseBdev2", 00:12:59.659 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:12:59.659 "is_configured": true, 00:12:59.659 
"data_offset": 2048, 00:12:59.659 "data_size": 63488 00:12:59.659 }, 00:12:59.659 { 00:12:59.659 "name": "BaseBdev3", 00:12:59.659 "uuid": "58505254-4376-52b3-a96f-fb05163206e0", 00:12:59.659 "is_configured": true, 00:12:59.659 "data_offset": 2048, 00:12:59.659 "data_size": 63488 00:12:59.659 } 00:12:59.659 ] 00:12:59.659 }' 00:12:59.659 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.659 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.917 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:59.917 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.917 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.917 [2024-10-30 09:47:38.388695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:59.917 [2024-10-30 09:47:38.388743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.917 [2024-10-30 09:47:38.388759] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:12:59.917 [2024-10-30 09:47:38.388770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.917 [2024-10-30 09:47:38.389163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.917 [2024-10-30 09:47:38.389183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:59.917 [2024-10-30 09:47:38.389254] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:59.918 [2024-10-30 09:47:38.389266] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:59.918 [2024-10-30 09:47:38.389274] bdev_raid.c:3752:raid_bdev_examine_sb: 
*NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:59.918 [2024-10-30 09:47:38.389290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:59.918 [2024-10-30 09:47:38.397335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:12:59.918 spare 00:12:59.918 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.918 09:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:59.918 [2024-10-30 09:47:38.401507] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:00.850 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:00.851 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.851 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:00.851 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:00.851 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.851 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.851 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.851 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.851 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.851 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.851 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.851 "name": "raid_bdev1", 00:13:00.851 "uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 
00:13:00.851 "strip_size_kb": 64, 00:13:00.851 "state": "online", 00:13:00.851 "raid_level": "raid5f", 00:13:00.851 "superblock": true, 00:13:00.851 "num_base_bdevs": 3, 00:13:00.851 "num_base_bdevs_discovered": 3, 00:13:00.851 "num_base_bdevs_operational": 3, 00:13:00.851 "process": { 00:13:00.851 "type": "rebuild", 00:13:00.851 "target": "spare", 00:13:00.851 "progress": { 00:13:00.851 "blocks": 20480, 00:13:00.851 "percent": 16 00:13:00.851 } 00:13:00.851 }, 00:13:00.851 "base_bdevs_list": [ 00:13:00.851 { 00:13:00.851 "name": "spare", 00:13:00.851 "uuid": "9ad7cd16-8661-533f-bd96-2f1f1294d486", 00:13:00.851 "is_configured": true, 00:13:00.851 "data_offset": 2048, 00:13:00.851 "data_size": 63488 00:13:00.851 }, 00:13:00.851 { 00:13:00.851 "name": "BaseBdev2", 00:13:00.851 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:13:00.851 "is_configured": true, 00:13:00.851 "data_offset": 2048, 00:13:00.851 "data_size": 63488 00:13:00.851 }, 00:13:00.851 { 00:13:00.851 "name": "BaseBdev3", 00:13:00.851 "uuid": "58505254-4376-52b3-a96f-fb05163206e0", 00:13:00.851 "is_configured": true, 00:13:00.851 "data_offset": 2048, 00:13:00.851 "data_size": 63488 00:13:00.851 } 00:13:00.851 ] 00:13:00.851 }' 00:13:00.851 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.109 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:01.109 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.109 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:01.109 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:01.109 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.109 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:13:01.109 [2024-10-30 09:47:39.502569] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:01.109 [2024-10-30 09:47:39.509496] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:01.109 [2024-10-30 09:47:39.509546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.109 [2024-10-30 09:47:39.509560] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:01.109 [2024-10-30 09:47:39.509566] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:01.109 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.109 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:01.109 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.109 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.109 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:01.109 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.109 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:01.109 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.109 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.109 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.109 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.109 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:01.109 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.109 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.109 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.109 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.109 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.109 "name": "raid_bdev1", 00:13:01.109 "uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 00:13:01.109 "strip_size_kb": 64, 00:13:01.109 "state": "online", 00:13:01.109 "raid_level": "raid5f", 00:13:01.109 "superblock": true, 00:13:01.109 "num_base_bdevs": 3, 00:13:01.109 "num_base_bdevs_discovered": 2, 00:13:01.109 "num_base_bdevs_operational": 2, 00:13:01.109 "base_bdevs_list": [ 00:13:01.109 { 00:13:01.109 "name": null, 00:13:01.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.109 "is_configured": false, 00:13:01.109 "data_offset": 0, 00:13:01.109 "data_size": 63488 00:13:01.109 }, 00:13:01.109 { 00:13:01.109 "name": "BaseBdev2", 00:13:01.109 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:13:01.109 "is_configured": true, 00:13:01.109 "data_offset": 2048, 00:13:01.109 "data_size": 63488 00:13:01.109 }, 00:13:01.109 { 00:13:01.109 "name": "BaseBdev3", 00:13:01.109 "uuid": "58505254-4376-52b3-a96f-fb05163206e0", 00:13:01.109 "is_configured": true, 00:13:01.109 "data_offset": 2048, 00:13:01.109 "data_size": 63488 00:13:01.109 } 00:13:01.109 ] 00:13:01.109 }' 00:13:01.109 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.109 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.367 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:01.367 09:47:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.367 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:01.367 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:01.367 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.367 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.367 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.367 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.367 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.367 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.367 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.367 "name": "raid_bdev1", 00:13:01.367 "uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 00:13:01.367 "strip_size_kb": 64, 00:13:01.368 "state": "online", 00:13:01.368 "raid_level": "raid5f", 00:13:01.368 "superblock": true, 00:13:01.368 "num_base_bdevs": 3, 00:13:01.368 "num_base_bdevs_discovered": 2, 00:13:01.368 "num_base_bdevs_operational": 2, 00:13:01.368 "base_bdevs_list": [ 00:13:01.368 { 00:13:01.368 "name": null, 00:13:01.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.368 "is_configured": false, 00:13:01.368 "data_offset": 0, 00:13:01.368 "data_size": 63488 00:13:01.368 }, 00:13:01.368 { 00:13:01.368 "name": "BaseBdev2", 00:13:01.368 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:13:01.368 "is_configured": true, 00:13:01.368 "data_offset": 2048, 00:13:01.368 "data_size": 63488 00:13:01.368 }, 00:13:01.368 { 00:13:01.368 "name": "BaseBdev3", 00:13:01.368 "uuid": 
"58505254-4376-52b3-a96f-fb05163206e0", 00:13:01.368 "is_configured": true, 00:13:01.368 "data_offset": 2048, 00:13:01.368 "data_size": 63488 00:13:01.368 } 00:13:01.368 ] 00:13:01.368 }' 00:13:01.368 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.368 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:01.368 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.368 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:01.368 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:01.368 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.368 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.368 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.368 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:01.368 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.368 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.368 [2024-10-30 09:47:39.959461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:01.368 [2024-10-30 09:47:39.959503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.368 [2024-10-30 09:47:39.959522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:01.368 [2024-10-30 09:47:39.959529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.368 [2024-10-30 09:47:39.959874] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.368 [2024-10-30 09:47:39.959885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:01.368 [2024-10-30 09:47:39.959943] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:01.368 [2024-10-30 09:47:39.959955] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:01.368 [2024-10-30 09:47:39.959962] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:01.368 [2024-10-30 09:47:39.959969] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:01.368 BaseBdev1 00:13:01.368 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.368 09:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:02.766 09:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:02.766 09:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.766 09:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.766 09:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:02.766 09:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.766 09:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:02.766 09:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.766 09:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.766 09:47:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.766 09:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.766 09:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.766 09:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.766 09:47:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.766 09:47:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.766 09:47:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.766 09:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.766 "name": "raid_bdev1", 00:13:02.766 "uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 00:13:02.766 "strip_size_kb": 64, 00:13:02.766 "state": "online", 00:13:02.766 "raid_level": "raid5f", 00:13:02.766 "superblock": true, 00:13:02.766 "num_base_bdevs": 3, 00:13:02.766 "num_base_bdevs_discovered": 2, 00:13:02.766 "num_base_bdevs_operational": 2, 00:13:02.766 "base_bdevs_list": [ 00:13:02.766 { 00:13:02.766 "name": null, 00:13:02.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.766 "is_configured": false, 00:13:02.766 "data_offset": 0, 00:13:02.766 "data_size": 63488 00:13:02.766 }, 00:13:02.766 { 00:13:02.766 "name": "BaseBdev2", 00:13:02.766 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:13:02.766 "is_configured": true, 00:13:02.766 "data_offset": 2048, 00:13:02.766 "data_size": 63488 00:13:02.766 }, 00:13:02.766 { 00:13:02.766 "name": "BaseBdev3", 00:13:02.766 "uuid": "58505254-4376-52b3-a96f-fb05163206e0", 00:13:02.766 "is_configured": true, 00:13:02.766 "data_offset": 2048, 00:13:02.766 "data_size": 63488 00:13:02.766 } 00:13:02.766 ] 00:13:02.766 }' 00:13:02.766 09:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:02.766 09:47:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.766 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:02.766 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.766 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:02.766 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:02.766 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.766 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.766 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.766 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.766 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.766 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.766 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.766 "name": "raid_bdev1", 00:13:02.766 "uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 00:13:02.766 "strip_size_kb": 64, 00:13:02.766 "state": "online", 00:13:02.766 "raid_level": "raid5f", 00:13:02.766 "superblock": true, 00:13:02.766 "num_base_bdevs": 3, 00:13:02.766 "num_base_bdevs_discovered": 2, 00:13:02.766 "num_base_bdevs_operational": 2, 00:13:02.766 "base_bdevs_list": [ 00:13:02.766 { 00:13:02.766 "name": null, 00:13:02.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.766 "is_configured": false, 00:13:02.766 "data_offset": 0, 00:13:02.766 "data_size": 63488 00:13:02.766 }, 00:13:02.766 { 00:13:02.766 "name": 
"BaseBdev2", 00:13:02.766 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:13:02.766 "is_configured": true, 00:13:02.766 "data_offset": 2048, 00:13:02.766 "data_size": 63488 00:13:02.766 }, 00:13:02.766 { 00:13:02.766 "name": "BaseBdev3", 00:13:02.766 "uuid": "58505254-4376-52b3-a96f-fb05163206e0", 00:13:02.766 "is_configured": true, 00:13:02.766 "data_offset": 2048, 00:13:02.766 "data_size": 63488 00:13:02.766 } 00:13:02.766 ] 00:13:02.766 }' 00:13:02.766 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.766 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:02.766 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.766 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:02.766 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:02.766 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:13:02.766 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:02.766 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:02.766 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:02.766 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:02.766 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:02.766 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:02.766 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.766 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.766 [2024-10-30 09:47:41.379796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:02.766 [2024-10-30 09:47:41.379913] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:02.766 [2024-10-30 09:47:41.379925] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:03.030 request: 00:13:03.030 { 00:13:03.030 "base_bdev": "BaseBdev1", 00:13:03.030 "raid_bdev": "raid_bdev1", 00:13:03.030 "method": "bdev_raid_add_base_bdev", 00:13:03.030 "req_id": 1 00:13:03.030 } 00:13:03.030 Got JSON-RPC error response 00:13:03.030 response: 00:13:03.030 { 00:13:03.030 "code": -22, 00:13:03.030 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:03.030 } 00:13:03.030 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:03.030 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:13:03.030 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:03.030 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:03.030 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:03.030 09:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:03.966 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:03.966 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.966 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:13:03.966 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:03.966 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.966 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:03.966 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.966 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.966 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.966 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.966 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.966 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.966 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.966 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.966 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.966 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.966 "name": "raid_bdev1", 00:13:03.966 "uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 00:13:03.966 "strip_size_kb": 64, 00:13:03.966 "state": "online", 00:13:03.966 "raid_level": "raid5f", 00:13:03.966 "superblock": true, 00:13:03.966 "num_base_bdevs": 3, 00:13:03.966 "num_base_bdevs_discovered": 2, 00:13:03.966 "num_base_bdevs_operational": 2, 00:13:03.966 "base_bdevs_list": [ 00:13:03.966 { 00:13:03.966 "name": null, 00:13:03.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.966 "is_configured": false, 00:13:03.966 "data_offset": 0, 00:13:03.966 
"data_size": 63488 00:13:03.966 }, 00:13:03.966 { 00:13:03.966 "name": "BaseBdev2", 00:13:03.966 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:13:03.966 "is_configured": true, 00:13:03.966 "data_offset": 2048, 00:13:03.966 "data_size": 63488 00:13:03.966 }, 00:13:03.966 { 00:13:03.966 "name": "BaseBdev3", 00:13:03.966 "uuid": "58505254-4376-52b3-a96f-fb05163206e0", 00:13:03.966 "is_configured": true, 00:13:03.966 "data_offset": 2048, 00:13:03.966 "data_size": 63488 00:13:03.966 } 00:13:03.966 ] 00:13:03.966 }' 00:13:03.966 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.966 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.226 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:04.226 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.226 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:04.226 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:04.226 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.227 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.227 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.227 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.227 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.227 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.227 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.227 "name": "raid_bdev1", 00:13:04.227 
"uuid": "fe818eef-d802-4330-91bd-6b866c62d7fc", 00:13:04.227 "strip_size_kb": 64, 00:13:04.227 "state": "online", 00:13:04.227 "raid_level": "raid5f", 00:13:04.227 "superblock": true, 00:13:04.227 "num_base_bdevs": 3, 00:13:04.227 "num_base_bdevs_discovered": 2, 00:13:04.227 "num_base_bdevs_operational": 2, 00:13:04.227 "base_bdevs_list": [ 00:13:04.227 { 00:13:04.227 "name": null, 00:13:04.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.227 "is_configured": false, 00:13:04.227 "data_offset": 0, 00:13:04.227 "data_size": 63488 00:13:04.227 }, 00:13:04.227 { 00:13:04.227 "name": "BaseBdev2", 00:13:04.227 "uuid": "ff2f3848-d936-5682-8ccf-f0b1213dc1e5", 00:13:04.227 "is_configured": true, 00:13:04.227 "data_offset": 2048, 00:13:04.227 "data_size": 63488 00:13:04.227 }, 00:13:04.227 { 00:13:04.227 "name": "BaseBdev3", 00:13:04.227 "uuid": "58505254-4376-52b3-a96f-fb05163206e0", 00:13:04.227 "is_configured": true, 00:13:04.227 "data_offset": 2048, 00:13:04.227 "data_size": 63488 00:13:04.227 } 00:13:04.227 ] 00:13:04.227 }' 00:13:04.227 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.227 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:04.227 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.227 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:04.227 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 79765 00:13:04.227 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 79765 ']' 00:13:04.227 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 79765 00:13:04.227 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:13:04.227 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:04.227 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79765 00:13:04.227 killing process with pid 79765 00:13:04.227 Received shutdown signal, test time was about 60.000000 seconds 00:13:04.227 00:13:04.227 Latency(us) 00:13:04.227 [2024-10-30T09:47:42.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:04.227 [2024-10-30T09:47:42.847Z] =================================================================================================================== 00:13:04.227 [2024-10-30T09:47:42.847Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:04.227 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:04.227 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:04.227 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79765' 00:13:04.227 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 79765 00:13:04.227 [2024-10-30 09:47:42.826592] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:04.227 09:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 79765 00:13:04.227 [2024-10-30 09:47:42.826682] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:04.227 [2024-10-30 09:47:42.826729] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:04.227 [2024-10-30 09:47:42.826739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:04.485 [2024-10-30 09:47:43.018685] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:05.054 09:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # 
return 0 00:13:05.054 00:13:05.054 real 0m19.764s 00:13:05.054 user 0m24.672s 00:13:05.054 sys 0m1.945s 00:13:05.054 09:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:05.054 09:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.054 ************************************ 00:13:05.054 END TEST raid5f_rebuild_test_sb 00:13:05.054 ************************************ 00:13:05.054 09:47:43 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:13:05.054 09:47:43 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:13:05.054 09:47:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:05.054 09:47:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:05.054 09:47:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:05.054 ************************************ 00:13:05.054 START TEST raid5f_state_function_test 00:13:05.054 ************************************ 00:13:05.054 09:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 false 00:13:05.054 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:05.054 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:05.054 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:05.054 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:05.054 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:05.054 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:05.054 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:05.054 09:47:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:05.054 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:05.054 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:05.054 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:05.054 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:05.054 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:05.054 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:05.054 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:05.054 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:05.055 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:05.055 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:05.055 Process raid pid: 80486 00:13:05.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:05.055 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:05.055 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:05.055 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:05.055 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:05.055 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:05.055 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:05.055 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:05.055 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:05.055 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:05.055 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:05.055 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:05.055 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80486 00:13:05.055 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80486' 00:13:05.055 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80486 00:13:05.055 09:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 80486 ']' 00:13:05.055 09:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.055 09:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:05.055 09:47:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.055 09:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:05.055 09:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:05.055 09:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.315 [2024-10-30 09:47:43.683502] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:13:05.315 [2024-10-30 09:47:43.683738] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.315 [2024-10-30 09:47:43.840158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.315 [2024-10-30 09:47:43.918675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.574 [2024-10-30 09:47:44.026950] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:05.574 [2024-10-30 09:47:44.026975] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.140 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:06.140 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:13:06.140 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:06.140 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.140 09:47:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:06.140 [2024-10-30 09:47:44.580337] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:06.140 [2024-10-30 09:47:44.580385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:06.140 [2024-10-30 09:47:44.580394] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:06.140 [2024-10-30 09:47:44.580403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:06.140 [2024-10-30 09:47:44.580408] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:06.140 [2024-10-30 09:47:44.580416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:06.140 [2024-10-30 09:47:44.580422] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:06.140 [2024-10-30 09:47:44.580430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:06.140 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.140 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:06.140 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.140 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.140 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:06.140 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.140 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.140 09:47:44 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.140 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.140 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.140 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.140 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.140 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.140 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.140 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.140 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.140 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.140 "name": "Existed_Raid", 00:13:06.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.140 "strip_size_kb": 64, 00:13:06.140 "state": "configuring", 00:13:06.140 "raid_level": "raid5f", 00:13:06.140 "superblock": false, 00:13:06.140 "num_base_bdevs": 4, 00:13:06.140 "num_base_bdevs_discovered": 0, 00:13:06.140 "num_base_bdevs_operational": 4, 00:13:06.140 "base_bdevs_list": [ 00:13:06.140 { 00:13:06.140 "name": "BaseBdev1", 00:13:06.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.140 "is_configured": false, 00:13:06.140 "data_offset": 0, 00:13:06.140 "data_size": 0 00:13:06.140 }, 00:13:06.140 { 00:13:06.140 "name": "BaseBdev2", 00:13:06.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.140 "is_configured": false, 00:13:06.140 "data_offset": 0, 00:13:06.140 "data_size": 0 00:13:06.140 }, 00:13:06.140 { 00:13:06.140 "name": "BaseBdev3", 00:13:06.140 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:06.140 "is_configured": false, 00:13:06.140 "data_offset": 0, 00:13:06.140 "data_size": 0 00:13:06.140 }, 00:13:06.140 { 00:13:06.140 "name": "BaseBdev4", 00:13:06.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.140 "is_configured": false, 00:13:06.140 "data_offset": 0, 00:13:06.140 "data_size": 0 00:13:06.140 } 00:13:06.140 ] 00:13:06.140 }' 00:13:06.140 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.140 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.399 [2024-10-30 09:47:44.916363] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:06.399 [2024-10-30 09:47:44.916396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.399 [2024-10-30 09:47:44.924373] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:06.399 [2024-10-30 09:47:44.924408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:13:06.399 [2024-10-30 09:47:44.924414] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:06.399 [2024-10-30 09:47:44.924421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:06.399 [2024-10-30 09:47:44.924426] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:06.399 [2024-10-30 09:47:44.924433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:06.399 [2024-10-30 09:47:44.924438] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:06.399 [2024-10-30 09:47:44.924445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.399 [2024-10-30 09:47:44.951986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:06.399 BaseBdev1 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:06.399 
09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.399 [ 00:13:06.399 { 00:13:06.399 "name": "BaseBdev1", 00:13:06.399 "aliases": [ 00:13:06.399 "4644a6cf-41bd-4e7b-b23d-2ddce4d289b3" 00:13:06.399 ], 00:13:06.399 "product_name": "Malloc disk", 00:13:06.399 "block_size": 512, 00:13:06.399 "num_blocks": 65536, 00:13:06.399 "uuid": "4644a6cf-41bd-4e7b-b23d-2ddce4d289b3", 00:13:06.399 "assigned_rate_limits": { 00:13:06.399 "rw_ios_per_sec": 0, 00:13:06.399 "rw_mbytes_per_sec": 0, 00:13:06.399 "r_mbytes_per_sec": 0, 00:13:06.399 "w_mbytes_per_sec": 0 00:13:06.399 }, 00:13:06.399 "claimed": true, 00:13:06.399 "claim_type": "exclusive_write", 00:13:06.399 "zoned": false, 00:13:06.399 "supported_io_types": { 00:13:06.399 "read": true, 00:13:06.399 "write": true, 00:13:06.399 "unmap": true, 00:13:06.399 "flush": true, 00:13:06.399 "reset": true, 00:13:06.399 "nvme_admin": false, 00:13:06.399 "nvme_io": false, 00:13:06.399 "nvme_io_md": false, 00:13:06.399 "write_zeroes": true, 00:13:06.399 "zcopy": true, 
00:13:06.399 "get_zone_info": false, 00:13:06.399 "zone_management": false, 00:13:06.399 "zone_append": false, 00:13:06.399 "compare": false, 00:13:06.399 "compare_and_write": false, 00:13:06.399 "abort": true, 00:13:06.399 "seek_hole": false, 00:13:06.399 "seek_data": false, 00:13:06.399 "copy": true, 00:13:06.399 "nvme_iov_md": false 00:13:06.399 }, 00:13:06.399 "memory_domains": [ 00:13:06.399 { 00:13:06.399 "dma_device_id": "system", 00:13:06.399 "dma_device_type": 1 00:13:06.399 }, 00:13:06.399 { 00:13:06.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.399 "dma_device_type": 2 00:13:06.399 } 00:13:06.399 ], 00:13:06.399 "driver_specific": {} 00:13:06.399 } 00:13:06.399 ] 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.399 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:06.400 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.400 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.400 09:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.400 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.400 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.400 09:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.400 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.400 "name": "Existed_Raid", 00:13:06.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.400 "strip_size_kb": 64, 00:13:06.400 "state": "configuring", 00:13:06.400 "raid_level": "raid5f", 00:13:06.400 "superblock": false, 00:13:06.400 "num_base_bdevs": 4, 00:13:06.400 "num_base_bdevs_discovered": 1, 00:13:06.400 "num_base_bdevs_operational": 4, 00:13:06.400 "base_bdevs_list": [ 00:13:06.400 { 00:13:06.400 "name": "BaseBdev1", 00:13:06.400 "uuid": "4644a6cf-41bd-4e7b-b23d-2ddce4d289b3", 00:13:06.400 "is_configured": true, 00:13:06.400 "data_offset": 0, 00:13:06.400 "data_size": 65536 00:13:06.400 }, 00:13:06.400 { 00:13:06.400 "name": "BaseBdev2", 00:13:06.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.400 "is_configured": false, 00:13:06.400 "data_offset": 0, 00:13:06.400 "data_size": 0 00:13:06.400 }, 00:13:06.400 { 00:13:06.400 "name": "BaseBdev3", 00:13:06.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.400 "is_configured": false, 00:13:06.400 "data_offset": 0, 00:13:06.400 "data_size": 0 00:13:06.400 }, 00:13:06.400 { 00:13:06.400 "name": "BaseBdev4", 00:13:06.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.400 "is_configured": 
false, 00:13:06.400 "data_offset": 0, 00:13:06.400 "data_size": 0 00:13:06.400 } 00:13:06.400 ] 00:13:06.400 }' 00:13:06.400 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.400 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.658 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:06.658 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.658 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.918 [2024-10-30 09:47:45.284093] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:06.918 [2024-10-30 09:47:45.284225] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:06.918 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.918 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:06.918 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.918 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.918 [2024-10-30 09:47:45.292137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:06.918 [2024-10-30 09:47:45.293649] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:06.918 [2024-10-30 09:47:45.293685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:06.918 [2024-10-30 09:47:45.293693] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:06.918 
[2024-10-30 09:47:45.293701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:06.918 [2024-10-30 09:47:45.293706] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:06.918 [2024-10-30 09:47:45.293713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:06.918 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.918 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:06.918 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:06.918 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:06.918 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.918 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.918 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:06.918 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.918 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.918 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.918 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.918 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.918 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.918 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:13:06.918 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.918 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.918 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.918 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.918 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.918 "name": "Existed_Raid", 00:13:06.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.918 "strip_size_kb": 64, 00:13:06.918 "state": "configuring", 00:13:06.918 "raid_level": "raid5f", 00:13:06.918 "superblock": false, 00:13:06.918 "num_base_bdevs": 4, 00:13:06.918 "num_base_bdevs_discovered": 1, 00:13:06.918 "num_base_bdevs_operational": 4, 00:13:06.918 "base_bdevs_list": [ 00:13:06.918 { 00:13:06.918 "name": "BaseBdev1", 00:13:06.918 "uuid": "4644a6cf-41bd-4e7b-b23d-2ddce4d289b3", 00:13:06.918 "is_configured": true, 00:13:06.918 "data_offset": 0, 00:13:06.918 "data_size": 65536 00:13:06.918 }, 00:13:06.918 { 00:13:06.918 "name": "BaseBdev2", 00:13:06.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.918 "is_configured": false, 00:13:06.918 "data_offset": 0, 00:13:06.918 "data_size": 0 00:13:06.918 }, 00:13:06.918 { 00:13:06.918 "name": "BaseBdev3", 00:13:06.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.918 "is_configured": false, 00:13:06.918 "data_offset": 0, 00:13:06.918 "data_size": 0 00:13:06.918 }, 00:13:06.918 { 00:13:06.918 "name": "BaseBdev4", 00:13:06.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.918 "is_configured": false, 00:13:06.918 "data_offset": 0, 00:13:06.918 "data_size": 0 00:13:06.918 } 00:13:06.918 ] 00:13:06.918 }' 00:13:06.918 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:06.918 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.178 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:07.178 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.178 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.178 [2024-10-30 09:47:45.634054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:07.178 BaseBdev2 00:13:07.178 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.178 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:07.178 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:07.178 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:07.178 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:07.178 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:07.178 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:07.178 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:07.178 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.178 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.178 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.178 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 
00:13:07.178 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.178 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.178 [ 00:13:07.178 { 00:13:07.178 "name": "BaseBdev2", 00:13:07.178 "aliases": [ 00:13:07.178 "e606e6fe-9958-4e86-b1dd-c92ff09d39e1" 00:13:07.178 ], 00:13:07.178 "product_name": "Malloc disk", 00:13:07.178 "block_size": 512, 00:13:07.178 "num_blocks": 65536, 00:13:07.178 "uuid": "e606e6fe-9958-4e86-b1dd-c92ff09d39e1", 00:13:07.178 "assigned_rate_limits": { 00:13:07.178 "rw_ios_per_sec": 0, 00:13:07.178 "rw_mbytes_per_sec": 0, 00:13:07.178 "r_mbytes_per_sec": 0, 00:13:07.178 "w_mbytes_per_sec": 0 00:13:07.178 }, 00:13:07.178 "claimed": true, 00:13:07.178 "claim_type": "exclusive_write", 00:13:07.178 "zoned": false, 00:13:07.178 "supported_io_types": { 00:13:07.178 "read": true, 00:13:07.178 "write": true, 00:13:07.178 "unmap": true, 00:13:07.178 "flush": true, 00:13:07.178 "reset": true, 00:13:07.178 "nvme_admin": false, 00:13:07.178 "nvme_io": false, 00:13:07.178 "nvme_io_md": false, 00:13:07.178 "write_zeroes": true, 00:13:07.178 "zcopy": true, 00:13:07.178 "get_zone_info": false, 00:13:07.178 "zone_management": false, 00:13:07.178 "zone_append": false, 00:13:07.178 "compare": false, 00:13:07.178 "compare_and_write": false, 00:13:07.178 "abort": true, 00:13:07.178 "seek_hole": false, 00:13:07.178 "seek_data": false, 00:13:07.178 "copy": true, 00:13:07.178 "nvme_iov_md": false 00:13:07.178 }, 00:13:07.178 "memory_domains": [ 00:13:07.178 { 00:13:07.178 "dma_device_id": "system", 00:13:07.178 "dma_device_type": 1 00:13:07.178 }, 00:13:07.178 { 00:13:07.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.178 "dma_device_type": 2 00:13:07.178 } 00:13:07.178 ], 00:13:07.178 "driver_specific": {} 00:13:07.178 } 00:13:07.178 ] 00:13:07.179 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.179 
09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:07.179 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:07.179 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:07.179 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:07.179 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.179 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.179 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:07.179 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.179 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:07.179 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.179 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.179 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.179 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.179 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.179 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.179 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.179 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.179 09:47:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.179 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.179 "name": "Existed_Raid", 00:13:07.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.179 "strip_size_kb": 64, 00:13:07.179 "state": "configuring", 00:13:07.179 "raid_level": "raid5f", 00:13:07.179 "superblock": false, 00:13:07.179 "num_base_bdevs": 4, 00:13:07.179 "num_base_bdevs_discovered": 2, 00:13:07.179 "num_base_bdevs_operational": 4, 00:13:07.179 "base_bdevs_list": [ 00:13:07.179 { 00:13:07.179 "name": "BaseBdev1", 00:13:07.179 "uuid": "4644a6cf-41bd-4e7b-b23d-2ddce4d289b3", 00:13:07.179 "is_configured": true, 00:13:07.179 "data_offset": 0, 00:13:07.179 "data_size": 65536 00:13:07.179 }, 00:13:07.179 { 00:13:07.179 "name": "BaseBdev2", 00:13:07.179 "uuid": "e606e6fe-9958-4e86-b1dd-c92ff09d39e1", 00:13:07.179 "is_configured": true, 00:13:07.179 "data_offset": 0, 00:13:07.179 "data_size": 65536 00:13:07.179 }, 00:13:07.179 { 00:13:07.179 "name": "BaseBdev3", 00:13:07.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.179 "is_configured": false, 00:13:07.179 "data_offset": 0, 00:13:07.179 "data_size": 0 00:13:07.179 }, 00:13:07.179 { 00:13:07.179 "name": "BaseBdev4", 00:13:07.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.179 "is_configured": false, 00:13:07.179 "data_offset": 0, 00:13:07.179 "data_size": 0 00:13:07.179 } 00:13:07.179 ] 00:13:07.179 }' 00:13:07.179 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.179 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.438 09:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:07.438 09:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.438 09:47:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.438 [2024-10-30 09:47:46.017680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:07.438 BaseBdev3 00:13:07.438 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.438 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:07.438 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:07.438 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:07.438 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:07.438 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:07.438 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:07.438 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:07.438 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.439 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.439 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.439 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:07.439 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.439 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.439 [ 00:13:07.439 { 00:13:07.439 "name": "BaseBdev3", 00:13:07.439 "aliases": [ 00:13:07.439 "6e5221f0-f839-45d6-8b19-e98615a1b7b2" 00:13:07.439 ], 00:13:07.439 
"product_name": "Malloc disk", 00:13:07.439 "block_size": 512, 00:13:07.439 "num_blocks": 65536, 00:13:07.439 "uuid": "6e5221f0-f839-45d6-8b19-e98615a1b7b2", 00:13:07.439 "assigned_rate_limits": { 00:13:07.439 "rw_ios_per_sec": 0, 00:13:07.439 "rw_mbytes_per_sec": 0, 00:13:07.439 "r_mbytes_per_sec": 0, 00:13:07.439 "w_mbytes_per_sec": 0 00:13:07.439 }, 00:13:07.439 "claimed": true, 00:13:07.439 "claim_type": "exclusive_write", 00:13:07.439 "zoned": false, 00:13:07.439 "supported_io_types": { 00:13:07.439 "read": true, 00:13:07.439 "write": true, 00:13:07.439 "unmap": true, 00:13:07.439 "flush": true, 00:13:07.439 "reset": true, 00:13:07.439 "nvme_admin": false, 00:13:07.439 "nvme_io": false, 00:13:07.439 "nvme_io_md": false, 00:13:07.439 "write_zeroes": true, 00:13:07.439 "zcopy": true, 00:13:07.439 "get_zone_info": false, 00:13:07.439 "zone_management": false, 00:13:07.439 "zone_append": false, 00:13:07.439 "compare": false, 00:13:07.439 "compare_and_write": false, 00:13:07.439 "abort": true, 00:13:07.439 "seek_hole": false, 00:13:07.439 "seek_data": false, 00:13:07.439 "copy": true, 00:13:07.439 "nvme_iov_md": false 00:13:07.439 }, 00:13:07.439 "memory_domains": [ 00:13:07.439 { 00:13:07.439 "dma_device_id": "system", 00:13:07.439 "dma_device_type": 1 00:13:07.439 }, 00:13:07.439 { 00:13:07.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.439 "dma_device_type": 2 00:13:07.439 } 00:13:07.439 ], 00:13:07.439 "driver_specific": {} 00:13:07.439 } 00:13:07.439 ] 00:13:07.439 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.439 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:07.439 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:07.439 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:07.439 09:47:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:07.439 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.439 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.439 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:07.439 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.439 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:07.439 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.439 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.439 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.439 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.439 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.439 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.439 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.439 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.698 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.698 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.698 "name": "Existed_Raid", 00:13:07.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.698 "strip_size_kb": 64, 00:13:07.698 "state": "configuring", 00:13:07.698 
"raid_level": "raid5f", 00:13:07.698 "superblock": false, 00:13:07.698 "num_base_bdevs": 4, 00:13:07.698 "num_base_bdevs_discovered": 3, 00:13:07.698 "num_base_bdevs_operational": 4, 00:13:07.698 "base_bdevs_list": [ 00:13:07.698 { 00:13:07.698 "name": "BaseBdev1", 00:13:07.698 "uuid": "4644a6cf-41bd-4e7b-b23d-2ddce4d289b3", 00:13:07.698 "is_configured": true, 00:13:07.698 "data_offset": 0, 00:13:07.698 "data_size": 65536 00:13:07.698 }, 00:13:07.698 { 00:13:07.698 "name": "BaseBdev2", 00:13:07.698 "uuid": "e606e6fe-9958-4e86-b1dd-c92ff09d39e1", 00:13:07.698 "is_configured": true, 00:13:07.698 "data_offset": 0, 00:13:07.698 "data_size": 65536 00:13:07.698 }, 00:13:07.698 { 00:13:07.698 "name": "BaseBdev3", 00:13:07.698 "uuid": "6e5221f0-f839-45d6-8b19-e98615a1b7b2", 00:13:07.698 "is_configured": true, 00:13:07.698 "data_offset": 0, 00:13:07.698 "data_size": 65536 00:13:07.698 }, 00:13:07.698 { 00:13:07.698 "name": "BaseBdev4", 00:13:07.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.698 "is_configured": false, 00:13:07.698 "data_offset": 0, 00:13:07.698 "data_size": 0 00:13:07.698 } 00:13:07.698 ] 00:13:07.698 }' 00:13:07.698 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.698 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.957 [2024-10-30 09:47:46.371742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:07.957 [2024-10-30 09:47:46.371903] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:07.957 [2024-10-30 09:47:46.371930] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:07.957 [2024-10-30 09:47:46.372211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:07.957 [2024-10-30 09:47:46.376182] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:07.957 [2024-10-30 09:47:46.376271] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:07.957 [2024-10-30 09:47:46.376533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.957 BaseBdev4 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev4 -t 2000 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.957 [ 00:13:07.957 { 00:13:07.957 "name": "BaseBdev4", 00:13:07.957 "aliases": [ 00:13:07.957 "e91bda58-b4b2-48c6-a654-25db8337b883" 00:13:07.957 ], 00:13:07.957 "product_name": "Malloc disk", 00:13:07.957 "block_size": 512, 00:13:07.957 "num_blocks": 65536, 00:13:07.957 "uuid": "e91bda58-b4b2-48c6-a654-25db8337b883", 00:13:07.957 "assigned_rate_limits": { 00:13:07.957 "rw_ios_per_sec": 0, 00:13:07.957 "rw_mbytes_per_sec": 0, 00:13:07.957 "r_mbytes_per_sec": 0, 00:13:07.957 "w_mbytes_per_sec": 0 00:13:07.957 }, 00:13:07.957 "claimed": true, 00:13:07.957 "claim_type": "exclusive_write", 00:13:07.957 "zoned": false, 00:13:07.957 "supported_io_types": { 00:13:07.957 "read": true, 00:13:07.957 "write": true, 00:13:07.957 "unmap": true, 00:13:07.957 "flush": true, 00:13:07.957 "reset": true, 00:13:07.957 "nvme_admin": false, 00:13:07.957 "nvme_io": false, 00:13:07.957 "nvme_io_md": false, 00:13:07.957 "write_zeroes": true, 00:13:07.957 "zcopy": true, 00:13:07.957 "get_zone_info": false, 00:13:07.957 "zone_management": false, 00:13:07.957 "zone_append": false, 00:13:07.957 "compare": false, 00:13:07.957 "compare_and_write": false, 00:13:07.957 "abort": true, 00:13:07.957 "seek_hole": false, 00:13:07.957 "seek_data": false, 00:13:07.957 "copy": true, 00:13:07.957 "nvme_iov_md": false 00:13:07.957 }, 00:13:07.957 "memory_domains": [ 00:13:07.957 { 00:13:07.957 "dma_device_id": "system", 00:13:07.957 "dma_device_type": 1 00:13:07.957 }, 00:13:07.957 { 00:13:07.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.957 "dma_device_type": 2 00:13:07.957 } 00:13:07.957 ], 00:13:07.957 "driver_specific": {} 00:13:07.957 } 00:13:07.957 ] 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.957 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.958 09:47:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.958 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.958 "name": "Existed_Raid", 00:13:07.958 "uuid": "9ff07bee-a12f-4a6d-ae08-3ca6b9ea15b3", 00:13:07.958 "strip_size_kb": 64, 00:13:07.958 "state": "online", 00:13:07.958 "raid_level": "raid5f", 00:13:07.958 "superblock": false, 00:13:07.958 "num_base_bdevs": 4, 00:13:07.958 "num_base_bdevs_discovered": 4, 00:13:07.958 "num_base_bdevs_operational": 4, 00:13:07.958 "base_bdevs_list": [ 00:13:07.958 { 00:13:07.958 "name": "BaseBdev1", 00:13:07.958 "uuid": "4644a6cf-41bd-4e7b-b23d-2ddce4d289b3", 00:13:07.958 "is_configured": true, 00:13:07.958 "data_offset": 0, 00:13:07.958 "data_size": 65536 00:13:07.958 }, 00:13:07.958 { 00:13:07.958 "name": "BaseBdev2", 00:13:07.958 "uuid": "e606e6fe-9958-4e86-b1dd-c92ff09d39e1", 00:13:07.958 "is_configured": true, 00:13:07.958 "data_offset": 0, 00:13:07.958 "data_size": 65536 00:13:07.958 }, 00:13:07.958 { 00:13:07.958 "name": "BaseBdev3", 00:13:07.958 "uuid": "6e5221f0-f839-45d6-8b19-e98615a1b7b2", 00:13:07.958 "is_configured": true, 00:13:07.958 "data_offset": 0, 00:13:07.958 "data_size": 65536 00:13:07.958 }, 00:13:07.958 { 00:13:07.958 "name": "BaseBdev4", 00:13:07.958 "uuid": "e91bda58-b4b2-48c6-a654-25db8337b883", 00:13:07.958 "is_configured": true, 00:13:07.958 "data_offset": 0, 00:13:07.958 "data_size": 65536 00:13:07.958 } 00:13:07.958 ] 00:13:07.958 }' 00:13:07.958 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.958 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.217 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:08.217 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:08.217 
09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:08.217 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:08.217 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:08.217 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:08.217 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:08.217 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:08.217 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.217 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.217 [2024-10-30 09:47:46.725013] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:08.217 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.217 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:08.217 "name": "Existed_Raid", 00:13:08.217 "aliases": [ 00:13:08.217 "9ff07bee-a12f-4a6d-ae08-3ca6b9ea15b3" 00:13:08.217 ], 00:13:08.217 "product_name": "Raid Volume", 00:13:08.217 "block_size": 512, 00:13:08.217 "num_blocks": 196608, 00:13:08.217 "uuid": "9ff07bee-a12f-4a6d-ae08-3ca6b9ea15b3", 00:13:08.217 "assigned_rate_limits": { 00:13:08.217 "rw_ios_per_sec": 0, 00:13:08.217 "rw_mbytes_per_sec": 0, 00:13:08.217 "r_mbytes_per_sec": 0, 00:13:08.217 "w_mbytes_per_sec": 0 00:13:08.217 }, 00:13:08.217 "claimed": false, 00:13:08.217 "zoned": false, 00:13:08.217 "supported_io_types": { 00:13:08.217 "read": true, 00:13:08.217 "write": true, 00:13:08.217 "unmap": false, 00:13:08.217 "flush": false, 00:13:08.217 "reset": true, 00:13:08.217 "nvme_admin": false, 00:13:08.217 
"nvme_io": false, 00:13:08.217 "nvme_io_md": false, 00:13:08.217 "write_zeroes": true, 00:13:08.217 "zcopy": false, 00:13:08.217 "get_zone_info": false, 00:13:08.217 "zone_management": false, 00:13:08.217 "zone_append": false, 00:13:08.217 "compare": false, 00:13:08.217 "compare_and_write": false, 00:13:08.217 "abort": false, 00:13:08.217 "seek_hole": false, 00:13:08.217 "seek_data": false, 00:13:08.217 "copy": false, 00:13:08.217 "nvme_iov_md": false 00:13:08.217 }, 00:13:08.217 "driver_specific": { 00:13:08.217 "raid": { 00:13:08.217 "uuid": "9ff07bee-a12f-4a6d-ae08-3ca6b9ea15b3", 00:13:08.217 "strip_size_kb": 64, 00:13:08.217 "state": "online", 00:13:08.217 "raid_level": "raid5f", 00:13:08.217 "superblock": false, 00:13:08.217 "num_base_bdevs": 4, 00:13:08.217 "num_base_bdevs_discovered": 4, 00:13:08.217 "num_base_bdevs_operational": 4, 00:13:08.217 "base_bdevs_list": [ 00:13:08.217 { 00:13:08.217 "name": "BaseBdev1", 00:13:08.217 "uuid": "4644a6cf-41bd-4e7b-b23d-2ddce4d289b3", 00:13:08.217 "is_configured": true, 00:13:08.217 "data_offset": 0, 00:13:08.217 "data_size": 65536 00:13:08.217 }, 00:13:08.217 { 00:13:08.217 "name": "BaseBdev2", 00:13:08.217 "uuid": "e606e6fe-9958-4e86-b1dd-c92ff09d39e1", 00:13:08.217 "is_configured": true, 00:13:08.217 "data_offset": 0, 00:13:08.217 "data_size": 65536 00:13:08.217 }, 00:13:08.217 { 00:13:08.217 "name": "BaseBdev3", 00:13:08.217 "uuid": "6e5221f0-f839-45d6-8b19-e98615a1b7b2", 00:13:08.217 "is_configured": true, 00:13:08.217 "data_offset": 0, 00:13:08.217 "data_size": 65536 00:13:08.217 }, 00:13:08.217 { 00:13:08.217 "name": "BaseBdev4", 00:13:08.217 "uuid": "e91bda58-b4b2-48c6-a654-25db8337b883", 00:13:08.217 "is_configured": true, 00:13:08.217 "data_offset": 0, 00:13:08.217 "data_size": 65536 00:13:08.217 } 00:13:08.217 ] 00:13:08.217 } 00:13:08.217 } 00:13:08.217 }' 00:13:08.217 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:13:08.217 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:08.217 BaseBdev2 00:13:08.217 BaseBdev3 00:13:08.217 BaseBdev4' 00:13:08.217 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.217 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:08.217 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.217 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:08.218 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.218 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.218 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.218 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.218 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.218 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.218 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.477 09:47:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.477 [2024-10-30 09:47:46.928886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:08.477 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:08.478 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:08.478 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.478 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:08.478 09:47:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:08.478 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:08.478 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.478 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.478 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.478 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.478 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.478 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.478 09:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.478 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.478 09:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.478 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.478 "name": "Existed_Raid", 00:13:08.478 "uuid": "9ff07bee-a12f-4a6d-ae08-3ca6b9ea15b3", 00:13:08.478 "strip_size_kb": 64, 00:13:08.478 "state": "online", 00:13:08.478 "raid_level": "raid5f", 00:13:08.478 "superblock": false, 00:13:08.478 "num_base_bdevs": 4, 00:13:08.478 "num_base_bdevs_discovered": 3, 00:13:08.478 "num_base_bdevs_operational": 3, 00:13:08.478 "base_bdevs_list": [ 00:13:08.478 { 00:13:08.478 "name": null, 00:13:08.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.478 "is_configured": false, 00:13:08.478 "data_offset": 0, 00:13:08.478 "data_size": 65536 00:13:08.478 }, 00:13:08.478 { 00:13:08.478 "name": "BaseBdev2", 00:13:08.478 "uuid": 
"e606e6fe-9958-4e86-b1dd-c92ff09d39e1", 00:13:08.478 "is_configured": true, 00:13:08.478 "data_offset": 0, 00:13:08.478 "data_size": 65536 00:13:08.478 }, 00:13:08.478 { 00:13:08.478 "name": "BaseBdev3", 00:13:08.478 "uuid": "6e5221f0-f839-45d6-8b19-e98615a1b7b2", 00:13:08.478 "is_configured": true, 00:13:08.478 "data_offset": 0, 00:13:08.478 "data_size": 65536 00:13:08.478 }, 00:13:08.478 { 00:13:08.478 "name": "BaseBdev4", 00:13:08.478 "uuid": "e91bda58-b4b2-48c6-a654-25db8337b883", 00:13:08.478 "is_configured": true, 00:13:08.478 "data_offset": 0, 00:13:08.478 "data_size": 65536 00:13:08.478 } 00:13:08.478 ] 00:13:08.478 }' 00:13:08.478 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.478 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.737 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:08.737 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:08.737 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:08.737 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.737 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.737 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.737 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.737 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:08.737 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:08.737 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:08.737 
09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.737 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.737 [2024-10-30 09:47:47.310094] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:08.737 [2024-10-30 09:47:47.310174] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:08.737 [2024-10-30 09:47:47.355564] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:08.737 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.737 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:08.995 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:08.995 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.995 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:08.995 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.995 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.995 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.995 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:08.995 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:08.995 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:08.995 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.995 09:47:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:08.995 [2024-10-30 09:47:47.395608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:08.995 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.995 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:08.995 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:08.995 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:08.995 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.995 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.995 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.996 [2024-10-30 09:47:47.477778] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:08.996 [2024-10-30 09:47:47.477819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.996 
09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.996 BaseBdev2 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev2 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.996 [ 00:13:08.996 { 00:13:08.996 "name": "BaseBdev2", 00:13:08.996 "aliases": [ 00:13:08.996 "fffa86b8-c1e7-4e78-bffb-1c3568bc4629" 00:13:08.996 ], 00:13:08.996 "product_name": "Malloc disk", 00:13:08.996 "block_size": 512, 00:13:08.996 "num_blocks": 65536, 00:13:08.996 "uuid": "fffa86b8-c1e7-4e78-bffb-1c3568bc4629", 00:13:08.996 "assigned_rate_limits": { 00:13:08.996 "rw_ios_per_sec": 0, 00:13:08.996 "rw_mbytes_per_sec": 0, 00:13:08.996 "r_mbytes_per_sec": 0, 00:13:08.996 "w_mbytes_per_sec": 0 00:13:08.996 }, 00:13:08.996 "claimed": false, 00:13:08.996 "zoned": false, 00:13:08.996 "supported_io_types": { 00:13:08.996 "read": true, 00:13:08.996 "write": true, 00:13:08.996 "unmap": true, 00:13:08.996 
"flush": true, 00:13:08.996 "reset": true, 00:13:08.996 "nvme_admin": false, 00:13:08.996 "nvme_io": false, 00:13:08.996 "nvme_io_md": false, 00:13:08.996 "write_zeroes": true, 00:13:08.996 "zcopy": true, 00:13:08.996 "get_zone_info": false, 00:13:08.996 "zone_management": false, 00:13:08.996 "zone_append": false, 00:13:08.996 "compare": false, 00:13:08.996 "compare_and_write": false, 00:13:08.996 "abort": true, 00:13:08.996 "seek_hole": false, 00:13:08.996 "seek_data": false, 00:13:08.996 "copy": true, 00:13:08.996 "nvme_iov_md": false 00:13:08.996 }, 00:13:08.996 "memory_domains": [ 00:13:08.996 { 00:13:08.996 "dma_device_id": "system", 00:13:08.996 "dma_device_type": 1 00:13:08.996 }, 00:13:08.996 { 00:13:08.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.996 "dma_device_type": 2 00:13:08.996 } 00:13:08.996 ], 00:13:08.996 "driver_specific": {} 00:13:08.996 } 00:13:08.996 ] 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.996 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.255 BaseBdev3 00:13:09.255 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.256 [ 00:13:09.256 { 00:13:09.256 "name": "BaseBdev3", 00:13:09.256 "aliases": [ 00:13:09.256 "5dff6895-f6f0-4624-b8f5-1b465a0e05d7" 00:13:09.256 ], 00:13:09.256 "product_name": "Malloc disk", 00:13:09.256 "block_size": 512, 00:13:09.256 "num_blocks": 65536, 00:13:09.256 "uuid": "5dff6895-f6f0-4624-b8f5-1b465a0e05d7", 00:13:09.256 "assigned_rate_limits": { 00:13:09.256 "rw_ios_per_sec": 0, 00:13:09.256 "rw_mbytes_per_sec": 0, 00:13:09.256 "r_mbytes_per_sec": 0, 00:13:09.256 "w_mbytes_per_sec": 0 00:13:09.256 }, 00:13:09.256 "claimed": false, 00:13:09.256 "zoned": false, 00:13:09.256 "supported_io_types": { 00:13:09.256 "read": true, 00:13:09.256 "write": true, 
00:13:09.256 "unmap": true, 00:13:09.256 "flush": true, 00:13:09.256 "reset": true, 00:13:09.256 "nvme_admin": false, 00:13:09.256 "nvme_io": false, 00:13:09.256 "nvme_io_md": false, 00:13:09.256 "write_zeroes": true, 00:13:09.256 "zcopy": true, 00:13:09.256 "get_zone_info": false, 00:13:09.256 "zone_management": false, 00:13:09.256 "zone_append": false, 00:13:09.256 "compare": false, 00:13:09.256 "compare_and_write": false, 00:13:09.256 "abort": true, 00:13:09.256 "seek_hole": false, 00:13:09.256 "seek_data": false, 00:13:09.256 "copy": true, 00:13:09.256 "nvme_iov_md": false 00:13:09.256 }, 00:13:09.256 "memory_domains": [ 00:13:09.256 { 00:13:09.256 "dma_device_id": "system", 00:13:09.256 "dma_device_type": 1 00:13:09.256 }, 00:13:09.256 { 00:13:09.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.256 "dma_device_type": 2 00:13:09.256 } 00:13:09.256 ], 00:13:09.256 "driver_specific": {} 00:13:09.256 } 00:13:09.256 ] 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.256 BaseBdev4 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:09.256 09:47:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.256 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.256 [ 00:13:09.256 { 00:13:09.256 "name": "BaseBdev4", 00:13:09.256 "aliases": [ 00:13:09.256 "3546c354-dd94-4ae7-9fad-572764a69fc9" 00:13:09.256 ], 00:13:09.256 "product_name": "Malloc disk", 00:13:09.256 "block_size": 512, 00:13:09.256 "num_blocks": 65536, 00:13:09.256 "uuid": "3546c354-dd94-4ae7-9fad-572764a69fc9", 00:13:09.256 "assigned_rate_limits": { 00:13:09.256 "rw_ios_per_sec": 0, 00:13:09.256 "rw_mbytes_per_sec": 0, 00:13:09.256 "r_mbytes_per_sec": 0, 00:13:09.256 "w_mbytes_per_sec": 0 00:13:09.256 }, 00:13:09.256 "claimed": false, 00:13:09.256 "zoned": false, 00:13:09.256 "supported_io_types": { 00:13:09.256 "read": 
true, 00:13:09.256 "write": true, 00:13:09.256 "unmap": true, 00:13:09.257 "flush": true, 00:13:09.257 "reset": true, 00:13:09.257 "nvme_admin": false, 00:13:09.257 "nvme_io": false, 00:13:09.257 "nvme_io_md": false, 00:13:09.257 "write_zeroes": true, 00:13:09.257 "zcopy": true, 00:13:09.257 "get_zone_info": false, 00:13:09.257 "zone_management": false, 00:13:09.257 "zone_append": false, 00:13:09.257 "compare": false, 00:13:09.257 "compare_and_write": false, 00:13:09.257 "abort": true, 00:13:09.257 "seek_hole": false, 00:13:09.257 "seek_data": false, 00:13:09.257 "copy": true, 00:13:09.257 "nvme_iov_md": false 00:13:09.257 }, 00:13:09.257 "memory_domains": [ 00:13:09.257 { 00:13:09.257 "dma_device_id": "system", 00:13:09.257 "dma_device_type": 1 00:13:09.257 }, 00:13:09.257 { 00:13:09.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.257 "dma_device_type": 2 00:13:09.257 } 00:13:09.257 ], 00:13:09.257 "driver_specific": {} 00:13:09.257 } 00:13:09.257 ] 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.257 [2024-10-30 09:47:47.710781] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:09.257 [2024-10-30 09:47:47.710819] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:09.257 [2024-10-30 09:47:47.710835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:09.257 [2024-10-30 09:47:47.712317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:09.257 [2024-10-30 09:47:47.712359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.257 "name": "Existed_Raid", 00:13:09.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.257 "strip_size_kb": 64, 00:13:09.257 "state": "configuring", 00:13:09.257 "raid_level": "raid5f", 00:13:09.257 "superblock": false, 00:13:09.257 "num_base_bdevs": 4, 00:13:09.257 "num_base_bdevs_discovered": 3, 00:13:09.257 "num_base_bdevs_operational": 4, 00:13:09.257 "base_bdevs_list": [ 00:13:09.257 { 00:13:09.257 "name": "BaseBdev1", 00:13:09.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.257 "is_configured": false, 00:13:09.257 "data_offset": 0, 00:13:09.257 "data_size": 0 00:13:09.257 }, 00:13:09.257 { 00:13:09.257 "name": "BaseBdev2", 00:13:09.257 "uuid": "fffa86b8-c1e7-4e78-bffb-1c3568bc4629", 00:13:09.257 "is_configured": true, 00:13:09.257 "data_offset": 0, 00:13:09.257 "data_size": 65536 00:13:09.257 }, 00:13:09.257 { 00:13:09.257 "name": "BaseBdev3", 00:13:09.257 "uuid": "5dff6895-f6f0-4624-b8f5-1b465a0e05d7", 00:13:09.257 "is_configured": true, 00:13:09.257 "data_offset": 0, 00:13:09.257 "data_size": 65536 00:13:09.257 }, 00:13:09.257 { 00:13:09.257 "name": "BaseBdev4", 00:13:09.257 "uuid": "3546c354-dd94-4ae7-9fad-572764a69fc9", 00:13:09.257 "is_configured": true, 00:13:09.257 "data_offset": 0, 00:13:09.257 "data_size": 65536 00:13:09.257 } 00:13:09.257 ] 00:13:09.257 }' 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.257 09:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.516 
09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:09.517 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.517 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.517 [2024-10-30 09:47:48.030837] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:09.517 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.517 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:09.517 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:09.517 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:09.517 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:09.517 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.517 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:09.517 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.517 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.517 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.517 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.517 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.517 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:09.517 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.517 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.517 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.517 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.517 "name": "Existed_Raid", 00:13:09.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.517 "strip_size_kb": 64, 00:13:09.517 "state": "configuring", 00:13:09.517 "raid_level": "raid5f", 00:13:09.517 "superblock": false, 00:13:09.517 "num_base_bdevs": 4, 00:13:09.517 "num_base_bdevs_discovered": 2, 00:13:09.517 "num_base_bdevs_operational": 4, 00:13:09.517 "base_bdevs_list": [ 00:13:09.517 { 00:13:09.517 "name": "BaseBdev1", 00:13:09.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.517 "is_configured": false, 00:13:09.517 "data_offset": 0, 00:13:09.517 "data_size": 0 00:13:09.517 }, 00:13:09.517 { 00:13:09.517 "name": null, 00:13:09.517 "uuid": "fffa86b8-c1e7-4e78-bffb-1c3568bc4629", 00:13:09.517 "is_configured": false, 00:13:09.517 "data_offset": 0, 00:13:09.517 "data_size": 65536 00:13:09.517 }, 00:13:09.517 { 00:13:09.517 "name": "BaseBdev3", 00:13:09.517 "uuid": "5dff6895-f6f0-4624-b8f5-1b465a0e05d7", 00:13:09.517 "is_configured": true, 00:13:09.517 "data_offset": 0, 00:13:09.517 "data_size": 65536 00:13:09.517 }, 00:13:09.517 { 00:13:09.517 "name": "BaseBdev4", 00:13:09.517 "uuid": "3546c354-dd94-4ae7-9fad-572764a69fc9", 00:13:09.517 "is_configured": true, 00:13:09.517 "data_offset": 0, 00:13:09.517 "data_size": 65536 00:13:09.517 } 00:13:09.517 ] 00:13:09.517 }' 00:13:09.517 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.517 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.083 09:47:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.083 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.083 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.083 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:10.083 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.083 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:10.083 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:10.083 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.084 [2024-10-30 09:47:48.452996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:10.084 BaseBdev1 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:10.084 09:47:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.084 [ 00:13:10.084 { 00:13:10.084 "name": "BaseBdev1", 00:13:10.084 "aliases": [ 00:13:10.084 "ba1268d9-ec81-4f1f-859b-f8aec3b999b7" 00:13:10.084 ], 00:13:10.084 "product_name": "Malloc disk", 00:13:10.084 "block_size": 512, 00:13:10.084 "num_blocks": 65536, 00:13:10.084 "uuid": "ba1268d9-ec81-4f1f-859b-f8aec3b999b7", 00:13:10.084 "assigned_rate_limits": { 00:13:10.084 "rw_ios_per_sec": 0, 00:13:10.084 "rw_mbytes_per_sec": 0, 00:13:10.084 "r_mbytes_per_sec": 0, 00:13:10.084 "w_mbytes_per_sec": 0 00:13:10.084 }, 00:13:10.084 "claimed": true, 00:13:10.084 "claim_type": "exclusive_write", 00:13:10.084 "zoned": false, 00:13:10.084 "supported_io_types": { 00:13:10.084 "read": true, 00:13:10.084 "write": true, 00:13:10.084 "unmap": true, 00:13:10.084 "flush": true, 00:13:10.084 "reset": true, 00:13:10.084 "nvme_admin": false, 00:13:10.084 "nvme_io": false, 00:13:10.084 "nvme_io_md": false, 00:13:10.084 "write_zeroes": true, 00:13:10.084 "zcopy": true, 00:13:10.084 "get_zone_info": false, 00:13:10.084 "zone_management": false, 00:13:10.084 "zone_append": false, 00:13:10.084 "compare": false, 00:13:10.084 "compare_and_write": false, 00:13:10.084 "abort": true, 00:13:10.084 "seek_hole": false, 
00:13:10.084 "seek_data": false, 00:13:10.084 "copy": true, 00:13:10.084 "nvme_iov_md": false 00:13:10.084 }, 00:13:10.084 "memory_domains": [ 00:13:10.084 { 00:13:10.084 "dma_device_id": "system", 00:13:10.084 "dma_device_type": 1 00:13:10.084 }, 00:13:10.084 { 00:13:10.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.084 "dma_device_type": 2 00:13:10.084 } 00:13:10.084 ], 00:13:10.084 "driver_specific": {} 00:13:10.084 } 00:13:10.084 ] 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.084 09:47:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.084 "name": "Existed_Raid", 00:13:10.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.084 "strip_size_kb": 64, 00:13:10.084 "state": "configuring", 00:13:10.084 "raid_level": "raid5f", 00:13:10.084 "superblock": false, 00:13:10.084 "num_base_bdevs": 4, 00:13:10.084 "num_base_bdevs_discovered": 3, 00:13:10.084 "num_base_bdevs_operational": 4, 00:13:10.084 "base_bdevs_list": [ 00:13:10.084 { 00:13:10.084 "name": "BaseBdev1", 00:13:10.084 "uuid": "ba1268d9-ec81-4f1f-859b-f8aec3b999b7", 00:13:10.084 "is_configured": true, 00:13:10.084 "data_offset": 0, 00:13:10.084 "data_size": 65536 00:13:10.084 }, 00:13:10.084 { 00:13:10.084 "name": null, 00:13:10.084 "uuid": "fffa86b8-c1e7-4e78-bffb-1c3568bc4629", 00:13:10.084 "is_configured": false, 00:13:10.084 "data_offset": 0, 00:13:10.084 "data_size": 65536 00:13:10.084 }, 00:13:10.084 { 00:13:10.084 "name": "BaseBdev3", 00:13:10.084 "uuid": "5dff6895-f6f0-4624-b8f5-1b465a0e05d7", 00:13:10.084 "is_configured": true, 00:13:10.084 "data_offset": 0, 00:13:10.084 "data_size": 65536 00:13:10.084 }, 00:13:10.084 { 00:13:10.084 "name": "BaseBdev4", 00:13:10.084 "uuid": "3546c354-dd94-4ae7-9fad-572764a69fc9", 00:13:10.084 "is_configured": true, 00:13:10.084 "data_offset": 0, 00:13:10.084 "data_size": 65536 00:13:10.084 } 00:13:10.084 ] 00:13:10.084 }' 00:13:10.084 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.084 09:47:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.393 [2024-10-30 09:47:48.825129] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.393 09:47:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.393 "name": "Existed_Raid", 00:13:10.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.393 "strip_size_kb": 64, 00:13:10.393 "state": "configuring", 00:13:10.393 "raid_level": "raid5f", 00:13:10.393 "superblock": false, 00:13:10.393 "num_base_bdevs": 4, 00:13:10.393 "num_base_bdevs_discovered": 2, 00:13:10.393 "num_base_bdevs_operational": 4, 00:13:10.393 "base_bdevs_list": [ 00:13:10.393 { 00:13:10.393 "name": "BaseBdev1", 00:13:10.393 "uuid": "ba1268d9-ec81-4f1f-859b-f8aec3b999b7", 00:13:10.393 "is_configured": true, 00:13:10.393 "data_offset": 0, 00:13:10.393 "data_size": 65536 00:13:10.393 }, 00:13:10.393 { 00:13:10.393 "name": null, 00:13:10.393 "uuid": "fffa86b8-c1e7-4e78-bffb-1c3568bc4629", 00:13:10.393 "is_configured": false, 
00:13:10.393 "data_offset": 0, 00:13:10.393 "data_size": 65536 00:13:10.393 }, 00:13:10.393 { 00:13:10.393 "name": null, 00:13:10.393 "uuid": "5dff6895-f6f0-4624-b8f5-1b465a0e05d7", 00:13:10.393 "is_configured": false, 00:13:10.393 "data_offset": 0, 00:13:10.393 "data_size": 65536 00:13:10.393 }, 00:13:10.393 { 00:13:10.393 "name": "BaseBdev4", 00:13:10.393 "uuid": "3546c354-dd94-4ae7-9fad-572764a69fc9", 00:13:10.393 "is_configured": true, 00:13:10.393 "data_offset": 0, 00:13:10.393 "data_size": 65536 00:13:10.393 } 00:13:10.393 ] 00:13:10.393 }' 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.393 09:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.651 [2024-10-30 09:47:49.197199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.651 "name": "Existed_Raid", 00:13:10.651 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:10.651 "strip_size_kb": 64, 00:13:10.651 "state": "configuring", 00:13:10.651 "raid_level": "raid5f", 00:13:10.651 "superblock": false, 00:13:10.651 "num_base_bdevs": 4, 00:13:10.651 "num_base_bdevs_discovered": 3, 00:13:10.651 "num_base_bdevs_operational": 4, 00:13:10.651 "base_bdevs_list": [ 00:13:10.651 { 00:13:10.651 "name": "BaseBdev1", 00:13:10.651 "uuid": "ba1268d9-ec81-4f1f-859b-f8aec3b999b7", 00:13:10.651 "is_configured": true, 00:13:10.651 "data_offset": 0, 00:13:10.651 "data_size": 65536 00:13:10.651 }, 00:13:10.651 { 00:13:10.651 "name": null, 00:13:10.651 "uuid": "fffa86b8-c1e7-4e78-bffb-1c3568bc4629", 00:13:10.651 "is_configured": false, 00:13:10.651 "data_offset": 0, 00:13:10.651 "data_size": 65536 00:13:10.651 }, 00:13:10.651 { 00:13:10.651 "name": "BaseBdev3", 00:13:10.651 "uuid": "5dff6895-f6f0-4624-b8f5-1b465a0e05d7", 00:13:10.651 "is_configured": true, 00:13:10.651 "data_offset": 0, 00:13:10.651 "data_size": 65536 00:13:10.651 }, 00:13:10.651 { 00:13:10.651 "name": "BaseBdev4", 00:13:10.651 "uuid": "3546c354-dd94-4ae7-9fad-572764a69fc9", 00:13:10.651 "is_configured": true, 00:13:10.651 "data_offset": 0, 00:13:10.651 "data_size": 65536 00:13:10.651 } 00:13:10.651 ] 00:13:10.651 }' 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.651 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.909 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.909 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.909 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.909 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:11.174 09:47:49 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.174 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:11.174 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:11.174 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.174 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.174 [2024-10-30 09:47:49.561284] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:11.174 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.174 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:11.174 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.174 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.174 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:11.174 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.174 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:11.174 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.174 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.174 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.174 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.174 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:11.174 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.174 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.174 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.174 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.174 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.174 "name": "Existed_Raid", 00:13:11.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.174 "strip_size_kb": 64, 00:13:11.174 "state": "configuring", 00:13:11.174 "raid_level": "raid5f", 00:13:11.174 "superblock": false, 00:13:11.174 "num_base_bdevs": 4, 00:13:11.174 "num_base_bdevs_discovered": 2, 00:13:11.174 "num_base_bdevs_operational": 4, 00:13:11.174 "base_bdevs_list": [ 00:13:11.174 { 00:13:11.174 "name": null, 00:13:11.174 "uuid": "ba1268d9-ec81-4f1f-859b-f8aec3b999b7", 00:13:11.174 "is_configured": false, 00:13:11.174 "data_offset": 0, 00:13:11.174 "data_size": 65536 00:13:11.174 }, 00:13:11.174 { 00:13:11.174 "name": null, 00:13:11.174 "uuid": "fffa86b8-c1e7-4e78-bffb-1c3568bc4629", 00:13:11.174 "is_configured": false, 00:13:11.174 "data_offset": 0, 00:13:11.174 "data_size": 65536 00:13:11.174 }, 00:13:11.174 { 00:13:11.174 "name": "BaseBdev3", 00:13:11.174 "uuid": "5dff6895-f6f0-4624-b8f5-1b465a0e05d7", 00:13:11.174 "is_configured": true, 00:13:11.174 "data_offset": 0, 00:13:11.174 "data_size": 65536 00:13:11.174 }, 00:13:11.174 { 00:13:11.174 "name": "BaseBdev4", 00:13:11.174 "uuid": "3546c354-dd94-4ae7-9fad-572764a69fc9", 00:13:11.174 "is_configured": true, 00:13:11.175 "data_offset": 0, 00:13:11.175 "data_size": 65536 00:13:11.175 } 00:13:11.175 ] 00:13:11.175 }' 00:13:11.175 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:11.175 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.433 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:11.433 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.433 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.433 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.433 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.433 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:11.433 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:11.433 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.433 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.433 [2024-10-30 09:47:49.950528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:11.433 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.433 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:11.433 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.433 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.433 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:11.433 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- 
# local strip_size=64 00:13:11.433 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:11.433 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.434 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.434 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.434 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.434 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.434 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.434 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.434 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.434 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.434 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.434 "name": "Existed_Raid", 00:13:11.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.434 "strip_size_kb": 64, 00:13:11.434 "state": "configuring", 00:13:11.434 "raid_level": "raid5f", 00:13:11.434 "superblock": false, 00:13:11.434 "num_base_bdevs": 4, 00:13:11.434 "num_base_bdevs_discovered": 3, 00:13:11.434 "num_base_bdevs_operational": 4, 00:13:11.434 "base_bdevs_list": [ 00:13:11.434 { 00:13:11.434 "name": null, 00:13:11.434 "uuid": "ba1268d9-ec81-4f1f-859b-f8aec3b999b7", 00:13:11.434 "is_configured": false, 00:13:11.434 "data_offset": 0, 00:13:11.434 "data_size": 65536 00:13:11.434 }, 00:13:11.434 { 00:13:11.434 "name": "BaseBdev2", 00:13:11.434 "uuid": 
"fffa86b8-c1e7-4e78-bffb-1c3568bc4629", 00:13:11.434 "is_configured": true, 00:13:11.434 "data_offset": 0, 00:13:11.434 "data_size": 65536 00:13:11.434 }, 00:13:11.434 { 00:13:11.434 "name": "BaseBdev3", 00:13:11.434 "uuid": "5dff6895-f6f0-4624-b8f5-1b465a0e05d7", 00:13:11.434 "is_configured": true, 00:13:11.434 "data_offset": 0, 00:13:11.434 "data_size": 65536 00:13:11.434 }, 00:13:11.434 { 00:13:11.434 "name": "BaseBdev4", 00:13:11.434 "uuid": "3546c354-dd94-4ae7-9fad-572764a69fc9", 00:13:11.434 "is_configured": true, 00:13:11.434 "data_offset": 0, 00:13:11.434 "data_size": 65536 00:13:11.434 } 00:13:11.434 ] 00:13:11.434 }' 00:13:11.434 09:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.434 09:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.693 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.693 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:11.693 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.693 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.693 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.693 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:11.693 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:11.693 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.693 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.693 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:11.693 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.693 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ba1268d9-ec81-4f1f-859b-f8aec3b999b7 00:13:11.693 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.693 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.952 [2024-10-30 09:47:50.332539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:11.952 [2024-10-30 09:47:50.332584] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:11.952 [2024-10-30 09:47:50.332590] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:11.952 [2024-10-30 09:47:50.332793] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:11.952 [2024-10-30 09:47:50.336560] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:11.952 [2024-10-30 09:47:50.336582] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:11.952 [2024-10-30 09:47:50.336772] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.952 NewBaseBdev 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # 
local i 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.952 [ 00:13:11.952 { 00:13:11.952 "name": "NewBaseBdev", 00:13:11.952 "aliases": [ 00:13:11.952 "ba1268d9-ec81-4f1f-859b-f8aec3b999b7" 00:13:11.952 ], 00:13:11.952 "product_name": "Malloc disk", 00:13:11.952 "block_size": 512, 00:13:11.952 "num_blocks": 65536, 00:13:11.952 "uuid": "ba1268d9-ec81-4f1f-859b-f8aec3b999b7", 00:13:11.952 "assigned_rate_limits": { 00:13:11.952 "rw_ios_per_sec": 0, 00:13:11.952 "rw_mbytes_per_sec": 0, 00:13:11.952 "r_mbytes_per_sec": 0, 00:13:11.952 "w_mbytes_per_sec": 0 00:13:11.952 }, 00:13:11.952 "claimed": true, 00:13:11.952 "claim_type": "exclusive_write", 00:13:11.952 "zoned": false, 00:13:11.952 "supported_io_types": { 00:13:11.952 "read": true, 00:13:11.952 "write": true, 00:13:11.952 "unmap": true, 00:13:11.952 "flush": true, 00:13:11.952 "reset": true, 00:13:11.952 "nvme_admin": false, 00:13:11.952 "nvme_io": false, 00:13:11.952 "nvme_io_md": false, 00:13:11.952 "write_zeroes": true, 00:13:11.952 
"zcopy": true, 00:13:11.952 "get_zone_info": false, 00:13:11.952 "zone_management": false, 00:13:11.952 "zone_append": false, 00:13:11.952 "compare": false, 00:13:11.952 "compare_and_write": false, 00:13:11.952 "abort": true, 00:13:11.952 "seek_hole": false, 00:13:11.952 "seek_data": false, 00:13:11.952 "copy": true, 00:13:11.952 "nvme_iov_md": false 00:13:11.952 }, 00:13:11.952 "memory_domains": [ 00:13:11.952 { 00:13:11.952 "dma_device_id": "system", 00:13:11.952 "dma_device_type": 1 00:13:11.952 }, 00:13:11.952 { 00:13:11.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.952 "dma_device_type": 2 00:13:11.952 } 00:13:11.952 ], 00:13:11.952 "driver_specific": {} 00:13:11.952 } 00:13:11.952 ] 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.952 "name": "Existed_Raid", 00:13:11.952 "uuid": "d16a4a79-8b04-4e6a-ae61-d5c5f0bdbc46", 00:13:11.952 "strip_size_kb": 64, 00:13:11.952 "state": "online", 00:13:11.952 "raid_level": "raid5f", 00:13:11.952 "superblock": false, 00:13:11.952 "num_base_bdevs": 4, 00:13:11.952 "num_base_bdevs_discovered": 4, 00:13:11.952 "num_base_bdevs_operational": 4, 00:13:11.952 "base_bdevs_list": [ 00:13:11.952 { 00:13:11.952 "name": "NewBaseBdev", 00:13:11.952 "uuid": "ba1268d9-ec81-4f1f-859b-f8aec3b999b7", 00:13:11.952 "is_configured": true, 00:13:11.952 "data_offset": 0, 00:13:11.952 "data_size": 65536 00:13:11.952 }, 00:13:11.952 { 00:13:11.952 "name": "BaseBdev2", 00:13:11.952 "uuid": "fffa86b8-c1e7-4e78-bffb-1c3568bc4629", 00:13:11.952 "is_configured": true, 00:13:11.952 "data_offset": 0, 00:13:11.952 "data_size": 65536 00:13:11.952 }, 00:13:11.952 { 00:13:11.952 "name": "BaseBdev3", 00:13:11.952 "uuid": "5dff6895-f6f0-4624-b8f5-1b465a0e05d7", 00:13:11.952 "is_configured": true, 00:13:11.952 "data_offset": 0, 00:13:11.952 "data_size": 65536 00:13:11.952 }, 00:13:11.952 { 00:13:11.952 "name": "BaseBdev4", 00:13:11.952 "uuid": "3546c354-dd94-4ae7-9fad-572764a69fc9", 00:13:11.952 "is_configured": 
true, 00:13:11.952 "data_offset": 0, 00:13:11.952 "data_size": 65536 00:13:11.952 } 00:13:11.952 ] 00:13:11.952 }' 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.952 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.210 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:12.210 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:12.210 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:12.210 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:12.210 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:12.211 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:12.211 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:12.211 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.211 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.211 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:12.211 [2024-10-30 09:47:50.681266] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:12.211 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.211 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:12.211 "name": "Existed_Raid", 00:13:12.211 "aliases": [ 00:13:12.211 "d16a4a79-8b04-4e6a-ae61-d5c5f0bdbc46" 00:13:12.211 ], 00:13:12.211 "product_name": "Raid Volume", 00:13:12.211 "block_size": 
512, 00:13:12.211 "num_blocks": 196608, 00:13:12.211 "uuid": "d16a4a79-8b04-4e6a-ae61-d5c5f0bdbc46", 00:13:12.211 "assigned_rate_limits": { 00:13:12.211 "rw_ios_per_sec": 0, 00:13:12.211 "rw_mbytes_per_sec": 0, 00:13:12.211 "r_mbytes_per_sec": 0, 00:13:12.211 "w_mbytes_per_sec": 0 00:13:12.211 }, 00:13:12.211 "claimed": false, 00:13:12.211 "zoned": false, 00:13:12.211 "supported_io_types": { 00:13:12.211 "read": true, 00:13:12.211 "write": true, 00:13:12.211 "unmap": false, 00:13:12.211 "flush": false, 00:13:12.211 "reset": true, 00:13:12.211 "nvme_admin": false, 00:13:12.211 "nvme_io": false, 00:13:12.211 "nvme_io_md": false, 00:13:12.211 "write_zeroes": true, 00:13:12.211 "zcopy": false, 00:13:12.211 "get_zone_info": false, 00:13:12.211 "zone_management": false, 00:13:12.211 "zone_append": false, 00:13:12.211 "compare": false, 00:13:12.211 "compare_and_write": false, 00:13:12.211 "abort": false, 00:13:12.211 "seek_hole": false, 00:13:12.211 "seek_data": false, 00:13:12.211 "copy": false, 00:13:12.211 "nvme_iov_md": false 00:13:12.211 }, 00:13:12.211 "driver_specific": { 00:13:12.211 "raid": { 00:13:12.211 "uuid": "d16a4a79-8b04-4e6a-ae61-d5c5f0bdbc46", 00:13:12.211 "strip_size_kb": 64, 00:13:12.211 "state": "online", 00:13:12.211 "raid_level": "raid5f", 00:13:12.211 "superblock": false, 00:13:12.211 "num_base_bdevs": 4, 00:13:12.211 "num_base_bdevs_discovered": 4, 00:13:12.211 "num_base_bdevs_operational": 4, 00:13:12.211 "base_bdevs_list": [ 00:13:12.211 { 00:13:12.211 "name": "NewBaseBdev", 00:13:12.211 "uuid": "ba1268d9-ec81-4f1f-859b-f8aec3b999b7", 00:13:12.211 "is_configured": true, 00:13:12.211 "data_offset": 0, 00:13:12.211 "data_size": 65536 00:13:12.211 }, 00:13:12.211 { 00:13:12.211 "name": "BaseBdev2", 00:13:12.211 "uuid": "fffa86b8-c1e7-4e78-bffb-1c3568bc4629", 00:13:12.211 "is_configured": true, 00:13:12.211 "data_offset": 0, 00:13:12.211 "data_size": 65536 00:13:12.211 }, 00:13:12.211 { 00:13:12.211 "name": "BaseBdev3", 00:13:12.211 "uuid": 
"5dff6895-f6f0-4624-b8f5-1b465a0e05d7", 00:13:12.211 "is_configured": true, 00:13:12.211 "data_offset": 0, 00:13:12.211 "data_size": 65536 00:13:12.211 }, 00:13:12.211 { 00:13:12.211 "name": "BaseBdev4", 00:13:12.211 "uuid": "3546c354-dd94-4ae7-9fad-572764a69fc9", 00:13:12.211 "is_configured": true, 00:13:12.211 "data_offset": 0, 00:13:12.211 "data_size": 65536 00:13:12.211 } 00:13:12.211 ] 00:13:12.211 } 00:13:12.211 } 00:13:12.211 }' 00:13:12.211 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:12.211 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:12.211 BaseBdev2 00:13:12.211 BaseBdev3 00:13:12.211 BaseBdev4' 00:13:12.211 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.211 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:12.211 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:12.211 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:12.211 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.211 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.211 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.211 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.211 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:12.211 09:47:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:12.211 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:12.211 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:12.211 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.211 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.211 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.211 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:12.469 09:47:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.469 [2024-10-30 09:47:50.905111] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:12.469 [2024-10-30 09:47:50.905134] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:12.469 [2024-10-30 09:47:50.905186] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:12.469 [2024-10-30 09:47:50.905417] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:12.469 [2024-10-30 09:47:50.905433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:12.469 09:47:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80486 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 80486 ']' 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 80486 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80486 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80486' 00:13:12.469 killing process with pid 80486 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 80486 00:13:12.469 [2024-10-30 09:47:50.933383] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:12.469 09:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 80486 00:13:12.727 [2024-10-30 09:47:51.125338] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:13.292 00:13:13.292 real 0m8.075s 00:13:13.292 user 0m13.083s 00:13:13.292 sys 0m1.379s 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:13.292 ************************************ 00:13:13.292 END TEST raid5f_state_function_test 00:13:13.292 ************************************ 00:13:13.292 09:47:51 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:13:13.292 09:47:51 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:13.292 09:47:51 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:13.292 09:47:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:13.292 ************************************ 00:13:13.292 START TEST raid5f_state_function_test_sb 00:13:13.292 ************************************ 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 true 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:13.292 
09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:13.292 09:47:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81119 00:13:13.292 Process raid pid: 81119 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81119' 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81119 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 81119 ']' 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:13.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.292 09:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:13.292 [2024-10-30 09:47:51.800988] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:13:13.292 [2024-10-30 09:47:51.801117] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.550 [2024-10-30 09:47:51.958754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.550 [2024-10-30 09:47:52.057489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.808 [2024-10-30 09:47:52.193885] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.808 [2024-10-30 09:47:52.193935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:14.372 09:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:14.372 09:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:13:14.372 09:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:14.372 09:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.372 09:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.372 [2024-10-30 09:47:52.714775] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:14.372 [2024-10-30 09:47:52.714961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:14.372 [2024-10-30 09:47:52.714974] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:14.372 [2024-10-30 09:47:52.715024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:14.372 [2024-10-30 09:47:52.715032] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:13:14.372 [2024-10-30 09:47:52.715091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:14.372 [2024-10-30 09:47:52.715100] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:14.372 [2024-10-30 09:47:52.715145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:14.372 09:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.372 09:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:14.372 09:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.372 09:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.372 09:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:14.372 09:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.372 09:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:14.372 09:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.372 09:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.372 09:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.372 09:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.372 09:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.372 09:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:14.372 09:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.372 09:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.372 09:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.372 09:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.372 "name": "Existed_Raid", 00:13:14.372 "uuid": "58bd8900-51d9-41f7-b6bf-a9e90a207386", 00:13:14.372 "strip_size_kb": 64, 00:13:14.372 "state": "configuring", 00:13:14.372 "raid_level": "raid5f", 00:13:14.372 "superblock": true, 00:13:14.372 "num_base_bdevs": 4, 00:13:14.373 "num_base_bdevs_discovered": 0, 00:13:14.373 "num_base_bdevs_operational": 4, 00:13:14.373 "base_bdevs_list": [ 00:13:14.373 { 00:13:14.373 "name": "BaseBdev1", 00:13:14.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.373 "is_configured": false, 00:13:14.373 "data_offset": 0, 00:13:14.373 "data_size": 0 00:13:14.373 }, 00:13:14.373 { 00:13:14.373 "name": "BaseBdev2", 00:13:14.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.373 "is_configured": false, 00:13:14.373 "data_offset": 0, 00:13:14.373 "data_size": 0 00:13:14.373 }, 00:13:14.373 { 00:13:14.373 "name": "BaseBdev3", 00:13:14.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.373 "is_configured": false, 00:13:14.373 "data_offset": 0, 00:13:14.373 "data_size": 0 00:13:14.373 }, 00:13:14.373 { 00:13:14.373 "name": "BaseBdev4", 00:13:14.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.373 "is_configured": false, 00:13:14.373 "data_offset": 0, 00:13:14.373 "data_size": 0 00:13:14.373 } 00:13:14.373 ] 00:13:14.373 }' 00:13:14.373 09:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.373 09:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.630 [2024-10-30 09:47:53.046793] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:14.630 [2024-10-30 09:47:53.046832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.630 [2024-10-30 09:47:53.054810] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:14.630 [2024-10-30 09:47:53.055158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:14.630 [2024-10-30 09:47:53.055180] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:14.630 [2024-10-30 09:47:53.055195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:14.630 [2024-10-30 09:47:53.055202] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:14.630 [2024-10-30 09:47:53.055211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:14.630 [2024-10-30 09:47:53.055217] 
bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:14.630 [2024-10-30 09:47:53.055226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.630 [2024-10-30 09:47:53.087004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:14.630 BaseBdev1 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.630 [ 00:13:14.630 { 00:13:14.630 "name": "BaseBdev1", 00:13:14.630 "aliases": [ 00:13:14.630 "e8de49fb-fa25-4c95-8a48-d46a2b2c912e" 00:13:14.630 ], 00:13:14.630 "product_name": "Malloc disk", 00:13:14.630 "block_size": 512, 00:13:14.630 "num_blocks": 65536, 00:13:14.630 "uuid": "e8de49fb-fa25-4c95-8a48-d46a2b2c912e", 00:13:14.630 "assigned_rate_limits": { 00:13:14.630 "rw_ios_per_sec": 0, 00:13:14.630 "rw_mbytes_per_sec": 0, 00:13:14.630 "r_mbytes_per_sec": 0, 00:13:14.630 "w_mbytes_per_sec": 0 00:13:14.630 }, 00:13:14.630 "claimed": true, 00:13:14.630 "claim_type": "exclusive_write", 00:13:14.630 "zoned": false, 00:13:14.630 "supported_io_types": { 00:13:14.630 "read": true, 00:13:14.630 "write": true, 00:13:14.630 "unmap": true, 00:13:14.630 "flush": true, 00:13:14.630 "reset": true, 00:13:14.630 "nvme_admin": false, 00:13:14.630 "nvme_io": false, 00:13:14.630 "nvme_io_md": false, 00:13:14.630 "write_zeroes": true, 00:13:14.630 "zcopy": true, 00:13:14.630 "get_zone_info": false, 00:13:14.630 "zone_management": false, 00:13:14.630 "zone_append": false, 00:13:14.630 "compare": false, 00:13:14.630 "compare_and_write": false, 00:13:14.630 "abort": true, 00:13:14.630 "seek_hole": false, 00:13:14.630 "seek_data": false, 00:13:14.630 "copy": true, 00:13:14.630 "nvme_iov_md": false 00:13:14.630 }, 00:13:14.630 "memory_domains": [ 00:13:14.630 { 00:13:14.630 "dma_device_id": "system", 00:13:14.630 "dma_device_type": 1 00:13:14.630 }, 00:13:14.630 { 00:13:14.630 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:14.630 "dma_device_type": 2 00:13:14.630 } 00:13:14.630 ], 00:13:14.630 "driver_specific": {} 00:13:14.630 } 00:13:14.630 ] 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.630 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.631 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.631 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.631 09:47:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.631 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.631 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.631 "name": "Existed_Raid", 00:13:14.631 "uuid": "2843f598-fddc-49ee-b8c9-d741bc251d57", 00:13:14.631 "strip_size_kb": 64, 00:13:14.631 "state": "configuring", 00:13:14.631 "raid_level": "raid5f", 00:13:14.631 "superblock": true, 00:13:14.631 "num_base_bdevs": 4, 00:13:14.631 "num_base_bdevs_discovered": 1, 00:13:14.631 "num_base_bdevs_operational": 4, 00:13:14.631 "base_bdevs_list": [ 00:13:14.631 { 00:13:14.631 "name": "BaseBdev1", 00:13:14.631 "uuid": "e8de49fb-fa25-4c95-8a48-d46a2b2c912e", 00:13:14.631 "is_configured": true, 00:13:14.631 "data_offset": 2048, 00:13:14.631 "data_size": 63488 00:13:14.631 }, 00:13:14.631 { 00:13:14.631 "name": "BaseBdev2", 00:13:14.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.631 "is_configured": false, 00:13:14.631 "data_offset": 0, 00:13:14.631 "data_size": 0 00:13:14.631 }, 00:13:14.631 { 00:13:14.631 "name": "BaseBdev3", 00:13:14.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.631 "is_configured": false, 00:13:14.631 "data_offset": 0, 00:13:14.631 "data_size": 0 00:13:14.631 }, 00:13:14.631 { 00:13:14.631 "name": "BaseBdev4", 00:13:14.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.631 "is_configured": false, 00:13:14.631 "data_offset": 0, 00:13:14.631 "data_size": 0 00:13:14.631 } 00:13:14.631 ] 00:13:14.631 }' 00:13:14.631 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.631 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.889 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:14.889 09:47:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.889 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.889 [2024-10-30 09:47:53.395118] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:14.889 [2024-10-30 09:47:53.395164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:14.889 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.889 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:14.889 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.889 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.889 [2024-10-30 09:47:53.403182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:14.889 [2024-10-30 09:47:53.405034] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:14.889 [2024-10-30 09:47:53.405085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:14.889 [2024-10-30 09:47:53.405095] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:14.889 [2024-10-30 09:47:53.405106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:14.890 [2024-10-30 09:47:53.405112] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:14.890 [2024-10-30 09:47:53.405121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:14.890 09:47:53 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.890 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:14.890 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:14.890 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:14.890 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.890 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.890 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:14.890 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.890 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:14.890 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.890 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.890 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.890 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.890 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.890 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.890 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.890 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.890 09:47:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.890 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.890 "name": "Existed_Raid", 00:13:14.890 "uuid": "370fc3a1-0df0-4e6d-84e7-ab9f3c02c347", 00:13:14.890 "strip_size_kb": 64, 00:13:14.890 "state": "configuring", 00:13:14.890 "raid_level": "raid5f", 00:13:14.890 "superblock": true, 00:13:14.890 "num_base_bdevs": 4, 00:13:14.890 "num_base_bdevs_discovered": 1, 00:13:14.890 "num_base_bdevs_operational": 4, 00:13:14.890 "base_bdevs_list": [ 00:13:14.890 { 00:13:14.890 "name": "BaseBdev1", 00:13:14.890 "uuid": "e8de49fb-fa25-4c95-8a48-d46a2b2c912e", 00:13:14.890 "is_configured": true, 00:13:14.890 "data_offset": 2048, 00:13:14.890 "data_size": 63488 00:13:14.890 }, 00:13:14.890 { 00:13:14.890 "name": "BaseBdev2", 00:13:14.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.890 "is_configured": false, 00:13:14.890 "data_offset": 0, 00:13:14.890 "data_size": 0 00:13:14.890 }, 00:13:14.890 { 00:13:14.890 "name": "BaseBdev3", 00:13:14.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.890 "is_configured": false, 00:13:14.890 "data_offset": 0, 00:13:14.890 "data_size": 0 00:13:14.890 }, 00:13:14.890 { 00:13:14.890 "name": "BaseBdev4", 00:13:14.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.890 "is_configured": false, 00:13:14.890 "data_offset": 0, 00:13:14.890 "data_size": 0 00:13:14.890 } 00:13:14.890 ] 00:13:14.890 }' 00:13:14.890 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.890 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.148 [2024-10-30 09:47:53.741510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:15.148 BaseBdev2 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.148 [ 00:13:15.148 { 00:13:15.148 "name": "BaseBdev2", 00:13:15.148 "aliases": [ 00:13:15.148 
"a64dfd3c-ba31-4ec4-ba14-a3fe0d3c7757" 00:13:15.148 ], 00:13:15.148 "product_name": "Malloc disk", 00:13:15.148 "block_size": 512, 00:13:15.148 "num_blocks": 65536, 00:13:15.148 "uuid": "a64dfd3c-ba31-4ec4-ba14-a3fe0d3c7757", 00:13:15.148 "assigned_rate_limits": { 00:13:15.148 "rw_ios_per_sec": 0, 00:13:15.148 "rw_mbytes_per_sec": 0, 00:13:15.148 "r_mbytes_per_sec": 0, 00:13:15.148 "w_mbytes_per_sec": 0 00:13:15.148 }, 00:13:15.148 "claimed": true, 00:13:15.148 "claim_type": "exclusive_write", 00:13:15.148 "zoned": false, 00:13:15.148 "supported_io_types": { 00:13:15.148 "read": true, 00:13:15.148 "write": true, 00:13:15.148 "unmap": true, 00:13:15.148 "flush": true, 00:13:15.148 "reset": true, 00:13:15.148 "nvme_admin": false, 00:13:15.148 "nvme_io": false, 00:13:15.148 "nvme_io_md": false, 00:13:15.148 "write_zeroes": true, 00:13:15.148 "zcopy": true, 00:13:15.148 "get_zone_info": false, 00:13:15.148 "zone_management": false, 00:13:15.148 "zone_append": false, 00:13:15.148 "compare": false, 00:13:15.148 "compare_and_write": false, 00:13:15.148 "abort": true, 00:13:15.148 "seek_hole": false, 00:13:15.148 "seek_data": false, 00:13:15.148 "copy": true, 00:13:15.148 "nvme_iov_md": false 00:13:15.148 }, 00:13:15.148 "memory_domains": [ 00:13:15.148 { 00:13:15.148 "dma_device_id": "system", 00:13:15.148 "dma_device_type": 1 00:13:15.148 }, 00:13:15.148 { 00:13:15.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.148 "dma_device_type": 2 00:13:15.148 } 00:13:15.148 ], 00:13:15.148 "driver_specific": {} 00:13:15.148 } 00:13:15.148 ] 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.148 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.406 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.406 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.406 "name": "Existed_Raid", 00:13:15.406 "uuid": 
"370fc3a1-0df0-4e6d-84e7-ab9f3c02c347", 00:13:15.406 "strip_size_kb": 64, 00:13:15.406 "state": "configuring", 00:13:15.406 "raid_level": "raid5f", 00:13:15.406 "superblock": true, 00:13:15.406 "num_base_bdevs": 4, 00:13:15.406 "num_base_bdevs_discovered": 2, 00:13:15.406 "num_base_bdevs_operational": 4, 00:13:15.406 "base_bdevs_list": [ 00:13:15.406 { 00:13:15.406 "name": "BaseBdev1", 00:13:15.406 "uuid": "e8de49fb-fa25-4c95-8a48-d46a2b2c912e", 00:13:15.406 "is_configured": true, 00:13:15.406 "data_offset": 2048, 00:13:15.406 "data_size": 63488 00:13:15.406 }, 00:13:15.406 { 00:13:15.406 "name": "BaseBdev2", 00:13:15.406 "uuid": "a64dfd3c-ba31-4ec4-ba14-a3fe0d3c7757", 00:13:15.406 "is_configured": true, 00:13:15.406 "data_offset": 2048, 00:13:15.406 "data_size": 63488 00:13:15.406 }, 00:13:15.406 { 00:13:15.406 "name": "BaseBdev3", 00:13:15.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.406 "is_configured": false, 00:13:15.406 "data_offset": 0, 00:13:15.406 "data_size": 0 00:13:15.406 }, 00:13:15.406 { 00:13:15.406 "name": "BaseBdev4", 00:13:15.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.406 "is_configured": false, 00:13:15.406 "data_offset": 0, 00:13:15.406 "data_size": 0 00:13:15.406 } 00:13:15.406 ] 00:13:15.406 }' 00:13:15.406 09:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.406 09:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.664 [2024-10-30 09:47:54.122259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:15.664 BaseBdev3 
00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.664 [ 00:13:15.664 { 00:13:15.664 "name": "BaseBdev3", 00:13:15.664 "aliases": [ 00:13:15.664 "b58aea05-24b9-4769-8365-d6afa0928fdd" 00:13:15.664 ], 00:13:15.664 "product_name": "Malloc disk", 00:13:15.664 "block_size": 512, 00:13:15.664 "num_blocks": 65536, 00:13:15.664 "uuid": "b58aea05-24b9-4769-8365-d6afa0928fdd", 00:13:15.664 
"assigned_rate_limits": { 00:13:15.664 "rw_ios_per_sec": 0, 00:13:15.664 "rw_mbytes_per_sec": 0, 00:13:15.664 "r_mbytes_per_sec": 0, 00:13:15.664 "w_mbytes_per_sec": 0 00:13:15.664 }, 00:13:15.664 "claimed": true, 00:13:15.664 "claim_type": "exclusive_write", 00:13:15.664 "zoned": false, 00:13:15.664 "supported_io_types": { 00:13:15.664 "read": true, 00:13:15.664 "write": true, 00:13:15.664 "unmap": true, 00:13:15.664 "flush": true, 00:13:15.664 "reset": true, 00:13:15.664 "nvme_admin": false, 00:13:15.664 "nvme_io": false, 00:13:15.664 "nvme_io_md": false, 00:13:15.664 "write_zeroes": true, 00:13:15.664 "zcopy": true, 00:13:15.664 "get_zone_info": false, 00:13:15.664 "zone_management": false, 00:13:15.664 "zone_append": false, 00:13:15.664 "compare": false, 00:13:15.664 "compare_and_write": false, 00:13:15.664 "abort": true, 00:13:15.664 "seek_hole": false, 00:13:15.664 "seek_data": false, 00:13:15.664 "copy": true, 00:13:15.664 "nvme_iov_md": false 00:13:15.664 }, 00:13:15.664 "memory_domains": [ 00:13:15.664 { 00:13:15.664 "dma_device_id": "system", 00:13:15.664 "dma_device_type": 1 00:13:15.664 }, 00:13:15.664 { 00:13:15.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.664 "dma_device_type": 2 00:13:15.664 } 00:13:15.664 ], 00:13:15.664 "driver_specific": {} 00:13:15.664 } 00:13:15.664 ] 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.664 "name": "Existed_Raid", 00:13:15.664 "uuid": "370fc3a1-0df0-4e6d-84e7-ab9f3c02c347", 00:13:15.664 "strip_size_kb": 64, 00:13:15.664 "state": "configuring", 00:13:15.664 "raid_level": "raid5f", 00:13:15.664 "superblock": true, 00:13:15.664 "num_base_bdevs": 4, 00:13:15.664 "num_base_bdevs_discovered": 3, 
00:13:15.664 "num_base_bdevs_operational": 4, 00:13:15.664 "base_bdevs_list": [ 00:13:15.664 { 00:13:15.664 "name": "BaseBdev1", 00:13:15.664 "uuid": "e8de49fb-fa25-4c95-8a48-d46a2b2c912e", 00:13:15.664 "is_configured": true, 00:13:15.664 "data_offset": 2048, 00:13:15.664 "data_size": 63488 00:13:15.664 }, 00:13:15.664 { 00:13:15.664 "name": "BaseBdev2", 00:13:15.664 "uuid": "a64dfd3c-ba31-4ec4-ba14-a3fe0d3c7757", 00:13:15.664 "is_configured": true, 00:13:15.664 "data_offset": 2048, 00:13:15.664 "data_size": 63488 00:13:15.664 }, 00:13:15.664 { 00:13:15.664 "name": "BaseBdev3", 00:13:15.664 "uuid": "b58aea05-24b9-4769-8365-d6afa0928fdd", 00:13:15.664 "is_configured": true, 00:13:15.664 "data_offset": 2048, 00:13:15.664 "data_size": 63488 00:13:15.664 }, 00:13:15.664 { 00:13:15.664 "name": "BaseBdev4", 00:13:15.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.664 "is_configured": false, 00:13:15.664 "data_offset": 0, 00:13:15.664 "data_size": 0 00:13:15.664 } 00:13:15.664 ] 00:13:15.664 }' 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.664 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.923 [2024-10-30 09:47:54.468713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:15.923 [2024-10-30 09:47:54.468957] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:15.923 [2024-10-30 09:47:54.468972] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:15.923 BaseBdev4 
00:13:15.923 [2024-10-30 09:47:54.469245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.923 [2024-10-30 09:47:54.474157] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:15.923 [2024-10-30 09:47:54.474181] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:15.923 [2024-10-30 09:47:54.474404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:15.923 09:47:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.923 [ 00:13:15.923 { 00:13:15.923 "name": "BaseBdev4", 00:13:15.923 "aliases": [ 00:13:15.923 "7bcd89da-b34e-420a-b208-ba14fbde5d82" 00:13:15.923 ], 00:13:15.923 "product_name": "Malloc disk", 00:13:15.923 "block_size": 512, 00:13:15.923 "num_blocks": 65536, 00:13:15.923 "uuid": "7bcd89da-b34e-420a-b208-ba14fbde5d82", 00:13:15.923 "assigned_rate_limits": { 00:13:15.923 "rw_ios_per_sec": 0, 00:13:15.923 "rw_mbytes_per_sec": 0, 00:13:15.923 "r_mbytes_per_sec": 0, 00:13:15.923 "w_mbytes_per_sec": 0 00:13:15.923 }, 00:13:15.923 "claimed": true, 00:13:15.923 "claim_type": "exclusive_write", 00:13:15.923 "zoned": false, 00:13:15.923 "supported_io_types": { 00:13:15.923 "read": true, 00:13:15.923 "write": true, 00:13:15.923 "unmap": true, 00:13:15.923 "flush": true, 00:13:15.923 "reset": true, 00:13:15.923 "nvme_admin": false, 00:13:15.923 "nvme_io": false, 00:13:15.923 "nvme_io_md": false, 00:13:15.923 "write_zeroes": true, 00:13:15.923 "zcopy": true, 00:13:15.923 "get_zone_info": false, 00:13:15.923 "zone_management": false, 00:13:15.923 "zone_append": false, 00:13:15.923 "compare": false, 00:13:15.923 "compare_and_write": false, 00:13:15.923 "abort": true, 00:13:15.923 "seek_hole": false, 00:13:15.923 "seek_data": false, 00:13:15.923 "copy": true, 00:13:15.923 "nvme_iov_md": false 00:13:15.923 }, 00:13:15.923 "memory_domains": [ 00:13:15.923 { 00:13:15.923 "dma_device_id": "system", 00:13:15.923 "dma_device_type": 1 00:13:15.923 }, 00:13:15.923 { 00:13:15.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.923 "dma_device_type": 2 00:13:15.923 } 00:13:15.923 ], 00:13:15.923 "driver_specific": {} 00:13:15.923 } 00:13:15.923 ] 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.923 09:47:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.923 "name": "Existed_Raid", 00:13:15.923 "uuid": "370fc3a1-0df0-4e6d-84e7-ab9f3c02c347", 00:13:15.923 "strip_size_kb": 64, 00:13:15.923 "state": "online", 00:13:15.923 "raid_level": "raid5f", 00:13:15.923 "superblock": true, 00:13:15.923 "num_base_bdevs": 4, 00:13:15.923 "num_base_bdevs_discovered": 4, 00:13:15.923 "num_base_bdevs_operational": 4, 00:13:15.923 "base_bdevs_list": [ 00:13:15.923 { 00:13:15.923 "name": "BaseBdev1", 00:13:15.923 "uuid": "e8de49fb-fa25-4c95-8a48-d46a2b2c912e", 00:13:15.923 "is_configured": true, 00:13:15.923 "data_offset": 2048, 00:13:15.923 "data_size": 63488 00:13:15.923 }, 00:13:15.923 { 00:13:15.923 "name": "BaseBdev2", 00:13:15.923 "uuid": "a64dfd3c-ba31-4ec4-ba14-a3fe0d3c7757", 00:13:15.923 "is_configured": true, 00:13:15.923 "data_offset": 2048, 00:13:15.923 "data_size": 63488 00:13:15.923 }, 00:13:15.923 { 00:13:15.923 "name": "BaseBdev3", 00:13:15.923 "uuid": "b58aea05-24b9-4769-8365-d6afa0928fdd", 00:13:15.923 "is_configured": true, 00:13:15.923 "data_offset": 2048, 00:13:15.923 "data_size": 63488 00:13:15.923 }, 00:13:15.923 { 00:13:15.923 "name": "BaseBdev4", 00:13:15.923 "uuid": "7bcd89da-b34e-420a-b208-ba14fbde5d82", 00:13:15.923 "is_configured": true, 00:13:15.923 "data_offset": 2048, 00:13:15.923 "data_size": 63488 00:13:15.923 } 00:13:15.923 ] 00:13:15.923 }' 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.923 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.489 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:16.489 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:13:16.489 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:16.489 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:16.489 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:16.489 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:16.489 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:16.489 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:16.489 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.489 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.489 [2024-10-30 09:47:54.823834] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:16.489 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.489 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:16.489 "name": "Existed_Raid", 00:13:16.489 "aliases": [ 00:13:16.489 "370fc3a1-0df0-4e6d-84e7-ab9f3c02c347" 00:13:16.489 ], 00:13:16.489 "product_name": "Raid Volume", 00:13:16.489 "block_size": 512, 00:13:16.489 "num_blocks": 190464, 00:13:16.489 "uuid": "370fc3a1-0df0-4e6d-84e7-ab9f3c02c347", 00:13:16.489 "assigned_rate_limits": { 00:13:16.489 "rw_ios_per_sec": 0, 00:13:16.489 "rw_mbytes_per_sec": 0, 00:13:16.489 "r_mbytes_per_sec": 0, 00:13:16.489 "w_mbytes_per_sec": 0 00:13:16.489 }, 00:13:16.489 "claimed": false, 00:13:16.489 "zoned": false, 00:13:16.489 "supported_io_types": { 00:13:16.489 "read": true, 00:13:16.489 "write": true, 00:13:16.489 "unmap": false, 00:13:16.489 "flush": false, 
00:13:16.489 "reset": true, 00:13:16.489 "nvme_admin": false, 00:13:16.489 "nvme_io": false, 00:13:16.489 "nvme_io_md": false, 00:13:16.489 "write_zeroes": true, 00:13:16.489 "zcopy": false, 00:13:16.489 "get_zone_info": false, 00:13:16.489 "zone_management": false, 00:13:16.489 "zone_append": false, 00:13:16.489 "compare": false, 00:13:16.489 "compare_and_write": false, 00:13:16.489 "abort": false, 00:13:16.489 "seek_hole": false, 00:13:16.489 "seek_data": false, 00:13:16.489 "copy": false, 00:13:16.489 "nvme_iov_md": false 00:13:16.489 }, 00:13:16.489 "driver_specific": { 00:13:16.489 "raid": { 00:13:16.489 "uuid": "370fc3a1-0df0-4e6d-84e7-ab9f3c02c347", 00:13:16.489 "strip_size_kb": 64, 00:13:16.489 "state": "online", 00:13:16.489 "raid_level": "raid5f", 00:13:16.489 "superblock": true, 00:13:16.489 "num_base_bdevs": 4, 00:13:16.489 "num_base_bdevs_discovered": 4, 00:13:16.489 "num_base_bdevs_operational": 4, 00:13:16.489 "base_bdevs_list": [ 00:13:16.489 { 00:13:16.489 "name": "BaseBdev1", 00:13:16.489 "uuid": "e8de49fb-fa25-4c95-8a48-d46a2b2c912e", 00:13:16.489 "is_configured": true, 00:13:16.489 "data_offset": 2048, 00:13:16.489 "data_size": 63488 00:13:16.489 }, 00:13:16.489 { 00:13:16.489 "name": "BaseBdev2", 00:13:16.489 "uuid": "a64dfd3c-ba31-4ec4-ba14-a3fe0d3c7757", 00:13:16.489 "is_configured": true, 00:13:16.489 "data_offset": 2048, 00:13:16.489 "data_size": 63488 00:13:16.489 }, 00:13:16.489 { 00:13:16.489 "name": "BaseBdev3", 00:13:16.489 "uuid": "b58aea05-24b9-4769-8365-d6afa0928fdd", 00:13:16.489 "is_configured": true, 00:13:16.489 "data_offset": 2048, 00:13:16.489 "data_size": 63488 00:13:16.489 }, 00:13:16.489 { 00:13:16.489 "name": "BaseBdev4", 00:13:16.489 "uuid": "7bcd89da-b34e-420a-b208-ba14fbde5d82", 00:13:16.489 "is_configured": true, 00:13:16.489 "data_offset": 2048, 00:13:16.489 "data_size": 63488 00:13:16.489 } 00:13:16.489 ] 00:13:16.489 } 00:13:16.489 } 00:13:16.489 }' 00:13:16.489 09:47:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:16.489 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:16.489 BaseBdev2 00:13:16.489 BaseBdev3 00:13:16.489 BaseBdev4' 00:13:16.489 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.489 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:16.489 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.490 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.490 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:16.490 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.490 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.490 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.490 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.490 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.490 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.490 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:16.490 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.490 09:47:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.490 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.490 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.490 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.490 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.490 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.490 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:16.490 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.490 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.490 09:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.490 09:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:16.490 09:47:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.490 [2024-10-30 09:47:55.047709] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.490 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.747 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.747 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.747 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.747 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.747 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.747 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.747 "name": "Existed_Raid", 00:13:16.747 "uuid": "370fc3a1-0df0-4e6d-84e7-ab9f3c02c347", 00:13:16.747 "strip_size_kb": 64, 00:13:16.747 "state": "online", 00:13:16.747 "raid_level": "raid5f", 00:13:16.747 "superblock": true, 00:13:16.747 "num_base_bdevs": 4, 00:13:16.747 "num_base_bdevs_discovered": 3, 00:13:16.747 "num_base_bdevs_operational": 3, 00:13:16.747 "base_bdevs_list": [ 00:13:16.747 { 00:13:16.747 "name": 
null, 00:13:16.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.747 "is_configured": false, 00:13:16.747 "data_offset": 0, 00:13:16.747 "data_size": 63488 00:13:16.747 }, 00:13:16.747 { 00:13:16.747 "name": "BaseBdev2", 00:13:16.747 "uuid": "a64dfd3c-ba31-4ec4-ba14-a3fe0d3c7757", 00:13:16.747 "is_configured": true, 00:13:16.747 "data_offset": 2048, 00:13:16.747 "data_size": 63488 00:13:16.747 }, 00:13:16.747 { 00:13:16.747 "name": "BaseBdev3", 00:13:16.747 "uuid": "b58aea05-24b9-4769-8365-d6afa0928fdd", 00:13:16.747 "is_configured": true, 00:13:16.747 "data_offset": 2048, 00:13:16.747 "data_size": 63488 00:13:16.747 }, 00:13:16.747 { 00:13:16.747 "name": "BaseBdev4", 00:13:16.747 "uuid": "7bcd89da-b34e-420a-b208-ba14fbde5d82", 00:13:16.747 "is_configured": true, 00:13:16.747 "data_offset": 2048, 00:13:16.747 "data_size": 63488 00:13:16.747 } 00:13:16.747 ] 00:13:16.747 }' 00:13:16.747 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.747 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.006 [2024-10-30 09:47:55.465369] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:17.006 [2024-10-30 09:47:55.465519] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:17.006 [2024-10-30 09:47:55.524968] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.006 [2024-10-30 09:47:55.565001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.006 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.265 [2024-10-30 
09:47:55.652502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:17.265 [2024-10-30 09:47:55.652545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.265 09:47:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.265 BaseBdev2 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.265 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.265 [ 00:13:17.265 { 00:13:17.265 "name": "BaseBdev2", 00:13:17.266 "aliases": [ 00:13:17.266 "470d8ca8-ed81-4da3-a88e-f2a18d19fb1c" 00:13:17.266 ], 00:13:17.266 "product_name": "Malloc disk", 00:13:17.266 "block_size": 512, 00:13:17.266 
"num_blocks": 65536, 00:13:17.266 "uuid": "470d8ca8-ed81-4da3-a88e-f2a18d19fb1c", 00:13:17.266 "assigned_rate_limits": { 00:13:17.266 "rw_ios_per_sec": 0, 00:13:17.266 "rw_mbytes_per_sec": 0, 00:13:17.266 "r_mbytes_per_sec": 0, 00:13:17.266 "w_mbytes_per_sec": 0 00:13:17.266 }, 00:13:17.266 "claimed": false, 00:13:17.266 "zoned": false, 00:13:17.266 "supported_io_types": { 00:13:17.266 "read": true, 00:13:17.266 "write": true, 00:13:17.266 "unmap": true, 00:13:17.266 "flush": true, 00:13:17.266 "reset": true, 00:13:17.266 "nvme_admin": false, 00:13:17.266 "nvme_io": false, 00:13:17.266 "nvme_io_md": false, 00:13:17.266 "write_zeroes": true, 00:13:17.266 "zcopy": true, 00:13:17.266 "get_zone_info": false, 00:13:17.266 "zone_management": false, 00:13:17.266 "zone_append": false, 00:13:17.266 "compare": false, 00:13:17.266 "compare_and_write": false, 00:13:17.266 "abort": true, 00:13:17.266 "seek_hole": false, 00:13:17.266 "seek_data": false, 00:13:17.266 "copy": true, 00:13:17.266 "nvme_iov_md": false 00:13:17.266 }, 00:13:17.266 "memory_domains": [ 00:13:17.266 { 00:13:17.266 "dma_device_id": "system", 00:13:17.266 "dma_device_type": 1 00:13:17.266 }, 00:13:17.266 { 00:13:17.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.266 "dma_device_type": 2 00:13:17.266 } 00:13:17.266 ], 00:13:17.266 "driver_specific": {} 00:13:17.266 } 00:13:17.266 ] 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:17.266 09:47:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.266 BaseBdev3 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.266 [ 00:13:17.266 { 00:13:17.266 "name": "BaseBdev3", 00:13:17.266 "aliases": [ 00:13:17.266 
"90083aa8-a7b2-4bd7-b537-4a1545968b7c" 00:13:17.266 ], 00:13:17.266 "product_name": "Malloc disk", 00:13:17.266 "block_size": 512, 00:13:17.266 "num_blocks": 65536, 00:13:17.266 "uuid": "90083aa8-a7b2-4bd7-b537-4a1545968b7c", 00:13:17.266 "assigned_rate_limits": { 00:13:17.266 "rw_ios_per_sec": 0, 00:13:17.266 "rw_mbytes_per_sec": 0, 00:13:17.266 "r_mbytes_per_sec": 0, 00:13:17.266 "w_mbytes_per_sec": 0 00:13:17.266 }, 00:13:17.266 "claimed": false, 00:13:17.266 "zoned": false, 00:13:17.266 "supported_io_types": { 00:13:17.266 "read": true, 00:13:17.266 "write": true, 00:13:17.266 "unmap": true, 00:13:17.266 "flush": true, 00:13:17.266 "reset": true, 00:13:17.266 "nvme_admin": false, 00:13:17.266 "nvme_io": false, 00:13:17.266 "nvme_io_md": false, 00:13:17.266 "write_zeroes": true, 00:13:17.266 "zcopy": true, 00:13:17.266 "get_zone_info": false, 00:13:17.266 "zone_management": false, 00:13:17.266 "zone_append": false, 00:13:17.266 "compare": false, 00:13:17.266 "compare_and_write": false, 00:13:17.266 "abort": true, 00:13:17.266 "seek_hole": false, 00:13:17.266 "seek_data": false, 00:13:17.266 "copy": true, 00:13:17.266 "nvme_iov_md": false 00:13:17.266 }, 00:13:17.266 "memory_domains": [ 00:13:17.266 { 00:13:17.266 "dma_device_id": "system", 00:13:17.266 "dma_device_type": 1 00:13:17.266 }, 00:13:17.266 { 00:13:17.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.266 "dma_device_type": 2 00:13:17.266 } 00:13:17.266 ], 00:13:17.266 "driver_specific": {} 00:13:17.266 } 00:13:17.266 ] 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:17.266 09:47:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.266 BaseBdev4 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:13:17.266 [ 00:13:17.266 { 00:13:17.266 "name": "BaseBdev4", 00:13:17.266 "aliases": [ 00:13:17.266 "8b9a7ae1-adff-4120-ad44-a6f268ceead3" 00:13:17.266 ], 00:13:17.266 "product_name": "Malloc disk", 00:13:17.266 "block_size": 512, 00:13:17.266 "num_blocks": 65536, 00:13:17.266 "uuid": "8b9a7ae1-adff-4120-ad44-a6f268ceead3", 00:13:17.266 "assigned_rate_limits": { 00:13:17.266 "rw_ios_per_sec": 0, 00:13:17.266 "rw_mbytes_per_sec": 0, 00:13:17.266 "r_mbytes_per_sec": 0, 00:13:17.266 "w_mbytes_per_sec": 0 00:13:17.266 }, 00:13:17.266 "claimed": false, 00:13:17.266 "zoned": false, 00:13:17.266 "supported_io_types": { 00:13:17.266 "read": true, 00:13:17.266 "write": true, 00:13:17.266 "unmap": true, 00:13:17.266 "flush": true, 00:13:17.266 "reset": true, 00:13:17.266 "nvme_admin": false, 00:13:17.266 "nvme_io": false, 00:13:17.266 "nvme_io_md": false, 00:13:17.266 "write_zeroes": true, 00:13:17.266 "zcopy": true, 00:13:17.266 "get_zone_info": false, 00:13:17.266 "zone_management": false, 00:13:17.266 "zone_append": false, 00:13:17.266 "compare": false, 00:13:17.266 "compare_and_write": false, 00:13:17.266 "abort": true, 00:13:17.266 "seek_hole": false, 00:13:17.266 "seek_data": false, 00:13:17.266 "copy": true, 00:13:17.266 "nvme_iov_md": false 00:13:17.266 }, 00:13:17.266 "memory_domains": [ 00:13:17.266 { 00:13:17.266 "dma_device_id": "system", 00:13:17.266 "dma_device_type": 1 00:13:17.266 }, 00:13:17.266 { 00:13:17.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.266 "dma_device_type": 2 00:13:17.266 } 00:13:17.266 ], 00:13:17.266 "driver_specific": {} 00:13:17.266 } 00:13:17.266 ] 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:17.266 09:47:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.266 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.524 [2024-10-30 09:47:55.886159] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:17.524 [2024-10-30 09:47:55.886198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:17.524 [2024-10-30 09:47:55.886215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.524 [2024-10-30 09:47:55.887681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:17.524 [2024-10-30 09:47:55.887725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:17.524 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.524 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:17.524 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.524 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.524 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:17.524 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.524 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:17.524 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.524 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.524 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.524 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.524 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.524 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.524 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.524 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.524 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.524 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.524 "name": "Existed_Raid", 00:13:17.524 "uuid": "7ea26f67-4ffd-41c6-adc3-dab818e1d891", 00:13:17.524 "strip_size_kb": 64, 00:13:17.524 "state": "configuring", 00:13:17.524 "raid_level": "raid5f", 00:13:17.524 "superblock": true, 00:13:17.524 "num_base_bdevs": 4, 00:13:17.524 "num_base_bdevs_discovered": 3, 00:13:17.524 "num_base_bdevs_operational": 4, 00:13:17.524 "base_bdevs_list": [ 00:13:17.524 { 00:13:17.524 "name": "BaseBdev1", 00:13:17.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.524 "is_configured": false, 00:13:17.524 "data_offset": 0, 00:13:17.525 "data_size": 0 00:13:17.525 }, 00:13:17.525 { 00:13:17.525 "name": "BaseBdev2", 00:13:17.525 "uuid": "470d8ca8-ed81-4da3-a88e-f2a18d19fb1c", 00:13:17.525 "is_configured": true, 00:13:17.525 "data_offset": 2048, 00:13:17.525 
"data_size": 63488 00:13:17.525 }, 00:13:17.525 { 00:13:17.525 "name": "BaseBdev3", 00:13:17.525 "uuid": "90083aa8-a7b2-4bd7-b537-4a1545968b7c", 00:13:17.525 "is_configured": true, 00:13:17.525 "data_offset": 2048, 00:13:17.525 "data_size": 63488 00:13:17.525 }, 00:13:17.525 { 00:13:17.525 "name": "BaseBdev4", 00:13:17.525 "uuid": "8b9a7ae1-adff-4120-ad44-a6f268ceead3", 00:13:17.525 "is_configured": true, 00:13:17.525 "data_offset": 2048, 00:13:17.525 "data_size": 63488 00:13:17.525 } 00:13:17.525 ] 00:13:17.525 }' 00:13:17.525 09:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.525 09:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.782 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:17.782 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.782 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.782 [2024-10-30 09:47:56.218223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:17.782 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.782 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:17.782 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.782 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.782 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:17.782 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.782 09:47:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.782 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.782 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.782 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.782 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.782 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.782 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.782 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.782 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.782 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.782 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.782 "name": "Existed_Raid", 00:13:17.782 "uuid": "7ea26f67-4ffd-41c6-adc3-dab818e1d891", 00:13:17.782 "strip_size_kb": 64, 00:13:17.782 "state": "configuring", 00:13:17.782 "raid_level": "raid5f", 00:13:17.782 "superblock": true, 00:13:17.782 "num_base_bdevs": 4, 00:13:17.782 "num_base_bdevs_discovered": 2, 00:13:17.782 "num_base_bdevs_operational": 4, 00:13:17.782 "base_bdevs_list": [ 00:13:17.782 { 00:13:17.782 "name": "BaseBdev1", 00:13:17.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.782 "is_configured": false, 00:13:17.782 "data_offset": 0, 00:13:17.782 "data_size": 0 00:13:17.782 }, 00:13:17.782 { 00:13:17.782 "name": null, 00:13:17.782 "uuid": "470d8ca8-ed81-4da3-a88e-f2a18d19fb1c", 00:13:17.782 
"is_configured": false, 00:13:17.782 "data_offset": 0, 00:13:17.782 "data_size": 63488 00:13:17.782 }, 00:13:17.782 { 00:13:17.782 "name": "BaseBdev3", 00:13:17.782 "uuid": "90083aa8-a7b2-4bd7-b537-4a1545968b7c", 00:13:17.782 "is_configured": true, 00:13:17.782 "data_offset": 2048, 00:13:17.782 "data_size": 63488 00:13:17.782 }, 00:13:17.782 { 00:13:17.782 "name": "BaseBdev4", 00:13:17.782 "uuid": "8b9a7ae1-adff-4120-ad44-a6f268ceead3", 00:13:17.783 "is_configured": true, 00:13:17.783 "data_offset": 2048, 00:13:17.783 "data_size": 63488 00:13:17.783 } 00:13:17.783 ] 00:13:17.783 }' 00:13:17.783 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.783 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.040 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.040 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:18.040 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.040 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.040 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.040 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:18.040 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.041 [2024-10-30 09:47:56.600548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:13:18.041 BaseBdev1 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.041 [ 00:13:18.041 { 00:13:18.041 "name": "BaseBdev1", 00:13:18.041 "aliases": [ 00:13:18.041 "fc941ecb-cea7-4e1a-b16e-f11c9a2d1c4e" 00:13:18.041 ], 00:13:18.041 "product_name": "Malloc disk", 00:13:18.041 "block_size": 512, 00:13:18.041 "num_blocks": 65536, 00:13:18.041 "uuid": "fc941ecb-cea7-4e1a-b16e-f11c9a2d1c4e", 
00:13:18.041 "assigned_rate_limits": { 00:13:18.041 "rw_ios_per_sec": 0, 00:13:18.041 "rw_mbytes_per_sec": 0, 00:13:18.041 "r_mbytes_per_sec": 0, 00:13:18.041 "w_mbytes_per_sec": 0 00:13:18.041 }, 00:13:18.041 "claimed": true, 00:13:18.041 "claim_type": "exclusive_write", 00:13:18.041 "zoned": false, 00:13:18.041 "supported_io_types": { 00:13:18.041 "read": true, 00:13:18.041 "write": true, 00:13:18.041 "unmap": true, 00:13:18.041 "flush": true, 00:13:18.041 "reset": true, 00:13:18.041 "nvme_admin": false, 00:13:18.041 "nvme_io": false, 00:13:18.041 "nvme_io_md": false, 00:13:18.041 "write_zeroes": true, 00:13:18.041 "zcopy": true, 00:13:18.041 "get_zone_info": false, 00:13:18.041 "zone_management": false, 00:13:18.041 "zone_append": false, 00:13:18.041 "compare": false, 00:13:18.041 "compare_and_write": false, 00:13:18.041 "abort": true, 00:13:18.041 "seek_hole": false, 00:13:18.041 "seek_data": false, 00:13:18.041 "copy": true, 00:13:18.041 "nvme_iov_md": false 00:13:18.041 }, 00:13:18.041 "memory_domains": [ 00:13:18.041 { 00:13:18.041 "dma_device_id": "system", 00:13:18.041 "dma_device_type": 1 00:13:18.041 }, 00:13:18.041 { 00:13:18.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.041 "dma_device_type": 2 00:13:18.041 } 00:13:18.041 ], 00:13:18.041 "driver_specific": {} 00:13:18.041 } 00:13:18.041 ] 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.041 09:47:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.041 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.298 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.298 "name": "Existed_Raid", 00:13:18.299 "uuid": "7ea26f67-4ffd-41c6-adc3-dab818e1d891", 00:13:18.299 "strip_size_kb": 64, 00:13:18.299 "state": "configuring", 00:13:18.299 "raid_level": "raid5f", 00:13:18.299 "superblock": true, 00:13:18.299 "num_base_bdevs": 4, 00:13:18.299 "num_base_bdevs_discovered": 3, 00:13:18.299 "num_base_bdevs_operational": 4, 00:13:18.299 "base_bdevs_list": [ 00:13:18.299 { 00:13:18.299 "name": "BaseBdev1", 00:13:18.299 "uuid": "fc941ecb-cea7-4e1a-b16e-f11c9a2d1c4e", 
00:13:18.299 "is_configured": true, 00:13:18.299 "data_offset": 2048, 00:13:18.299 "data_size": 63488 00:13:18.299 }, 00:13:18.299 { 00:13:18.299 "name": null, 00:13:18.299 "uuid": "470d8ca8-ed81-4da3-a88e-f2a18d19fb1c", 00:13:18.299 "is_configured": false, 00:13:18.299 "data_offset": 0, 00:13:18.299 "data_size": 63488 00:13:18.299 }, 00:13:18.299 { 00:13:18.299 "name": "BaseBdev3", 00:13:18.299 "uuid": "90083aa8-a7b2-4bd7-b537-4a1545968b7c", 00:13:18.299 "is_configured": true, 00:13:18.299 "data_offset": 2048, 00:13:18.299 "data_size": 63488 00:13:18.299 }, 00:13:18.299 { 00:13:18.299 "name": "BaseBdev4", 00:13:18.299 "uuid": "8b9a7ae1-adff-4120-ad44-a6f268ceead3", 00:13:18.299 "is_configured": true, 00:13:18.299 "data_offset": 2048, 00:13:18.299 "data_size": 63488 00:13:18.299 } 00:13:18.299 ] 00:13:18.299 }' 00:13:18.299 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.299 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.556 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:18.556 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.556 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.556 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.556 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.556 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:18.556 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:18.556 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:18.556 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.556 [2024-10-30 09:47:56.964666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:18.556 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.556 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:18.557 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.557 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.557 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:18.557 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.557 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:18.557 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.557 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.557 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.557 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.557 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.557 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.557 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.557 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:13:18.557 09:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.557 09:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.557 "name": "Existed_Raid", 00:13:18.557 "uuid": "7ea26f67-4ffd-41c6-adc3-dab818e1d891", 00:13:18.557 "strip_size_kb": 64, 00:13:18.557 "state": "configuring", 00:13:18.557 "raid_level": "raid5f", 00:13:18.557 "superblock": true, 00:13:18.557 "num_base_bdevs": 4, 00:13:18.557 "num_base_bdevs_discovered": 2, 00:13:18.557 "num_base_bdevs_operational": 4, 00:13:18.557 "base_bdevs_list": [ 00:13:18.557 { 00:13:18.557 "name": "BaseBdev1", 00:13:18.557 "uuid": "fc941ecb-cea7-4e1a-b16e-f11c9a2d1c4e", 00:13:18.557 "is_configured": true, 00:13:18.557 "data_offset": 2048, 00:13:18.557 "data_size": 63488 00:13:18.557 }, 00:13:18.557 { 00:13:18.557 "name": null, 00:13:18.557 "uuid": "470d8ca8-ed81-4da3-a88e-f2a18d19fb1c", 00:13:18.557 "is_configured": false, 00:13:18.557 "data_offset": 0, 00:13:18.557 "data_size": 63488 00:13:18.557 }, 00:13:18.557 { 00:13:18.557 "name": null, 00:13:18.557 "uuid": "90083aa8-a7b2-4bd7-b537-4a1545968b7c", 00:13:18.557 "is_configured": false, 00:13:18.557 "data_offset": 0, 00:13:18.557 "data_size": 63488 00:13:18.557 }, 00:13:18.557 { 00:13:18.557 "name": "BaseBdev4", 00:13:18.557 "uuid": "8b9a7ae1-adff-4120-ad44-a6f268ceead3", 00:13:18.557 "is_configured": true, 00:13:18.557 "data_offset": 2048, 00:13:18.557 "data_size": 63488 00:13:18.557 } 00:13:18.557 ] 00:13:18.557 }' 00:13:18.557 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.557 09:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.897 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.897 09:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:13:18.897 09:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.897 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:18.897 09:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.897 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:18.897 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:18.897 09:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.897 09:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.897 [2024-10-30 09:47:57.320735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:18.897 09:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.897 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:18.897 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.897 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.897 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:18.897 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.897 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:18.897 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.897 09:47:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.897 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.897 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.897 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.897 09:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.897 09:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.898 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.898 09:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.898 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.898 "name": "Existed_Raid", 00:13:18.898 "uuid": "7ea26f67-4ffd-41c6-adc3-dab818e1d891", 00:13:18.898 "strip_size_kb": 64, 00:13:18.898 "state": "configuring", 00:13:18.898 "raid_level": "raid5f", 00:13:18.898 "superblock": true, 00:13:18.898 "num_base_bdevs": 4, 00:13:18.898 "num_base_bdevs_discovered": 3, 00:13:18.898 "num_base_bdevs_operational": 4, 00:13:18.898 "base_bdevs_list": [ 00:13:18.898 { 00:13:18.898 "name": "BaseBdev1", 00:13:18.898 "uuid": "fc941ecb-cea7-4e1a-b16e-f11c9a2d1c4e", 00:13:18.898 "is_configured": true, 00:13:18.898 "data_offset": 2048, 00:13:18.898 "data_size": 63488 00:13:18.898 }, 00:13:18.898 { 00:13:18.898 "name": null, 00:13:18.898 "uuid": "470d8ca8-ed81-4da3-a88e-f2a18d19fb1c", 00:13:18.898 "is_configured": false, 00:13:18.898 "data_offset": 0, 00:13:18.898 "data_size": 63488 00:13:18.898 }, 00:13:18.898 { 00:13:18.898 "name": "BaseBdev3", 00:13:18.898 "uuid": "90083aa8-a7b2-4bd7-b537-4a1545968b7c", 00:13:18.898 
"is_configured": true, 00:13:18.898 "data_offset": 2048, 00:13:18.898 "data_size": 63488 00:13:18.898 }, 00:13:18.898 { 00:13:18.898 "name": "BaseBdev4", 00:13:18.898 "uuid": "8b9a7ae1-adff-4120-ad44-a6f268ceead3", 00:13:18.898 "is_configured": true, 00:13:18.898 "data_offset": 2048, 00:13:18.898 "data_size": 63488 00:13:18.898 } 00:13:18.898 ] 00:13:18.898 }' 00:13:18.898 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.898 09:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.157 [2024-10-30 09:47:57.648805] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring 
raid5f 64 4 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.157 "name": "Existed_Raid", 00:13:19.157 "uuid": "7ea26f67-4ffd-41c6-adc3-dab818e1d891", 00:13:19.157 "strip_size_kb": 64, 00:13:19.157 "state": "configuring", 00:13:19.157 "raid_level": "raid5f", 00:13:19.157 
"superblock": true, 00:13:19.157 "num_base_bdevs": 4, 00:13:19.157 "num_base_bdevs_discovered": 2, 00:13:19.157 "num_base_bdevs_operational": 4, 00:13:19.157 "base_bdevs_list": [ 00:13:19.157 { 00:13:19.157 "name": null, 00:13:19.157 "uuid": "fc941ecb-cea7-4e1a-b16e-f11c9a2d1c4e", 00:13:19.157 "is_configured": false, 00:13:19.157 "data_offset": 0, 00:13:19.157 "data_size": 63488 00:13:19.157 }, 00:13:19.157 { 00:13:19.157 "name": null, 00:13:19.157 "uuid": "470d8ca8-ed81-4da3-a88e-f2a18d19fb1c", 00:13:19.157 "is_configured": false, 00:13:19.157 "data_offset": 0, 00:13:19.157 "data_size": 63488 00:13:19.157 }, 00:13:19.157 { 00:13:19.157 "name": "BaseBdev3", 00:13:19.157 "uuid": "90083aa8-a7b2-4bd7-b537-4a1545968b7c", 00:13:19.157 "is_configured": true, 00:13:19.157 "data_offset": 2048, 00:13:19.157 "data_size": 63488 00:13:19.157 }, 00:13:19.157 { 00:13:19.157 "name": "BaseBdev4", 00:13:19.157 "uuid": "8b9a7ae1-adff-4120-ad44-a6f268ceead3", 00:13:19.157 "is_configured": true, 00:13:19.157 "data_offset": 2048, 00:13:19.157 "data_size": 63488 00:13:19.157 } 00:13:19.157 ] 00:13:19.157 }' 00:13:19.157 09:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.158 09:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.415 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.415 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:19.415 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.415 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.673 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.673 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- 
# [[ false == \f\a\l\s\e ]] 00:13:19.673 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:19.673 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.673 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.673 [2024-10-30 09:47:58.063181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:19.673 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.673 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:19.673 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.673 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.673 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:19.673 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.673 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:19.673 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.673 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.674 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.674 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.674 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.674 09:47:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.674 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.674 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.674 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.674 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.674 "name": "Existed_Raid", 00:13:19.674 "uuid": "7ea26f67-4ffd-41c6-adc3-dab818e1d891", 00:13:19.674 "strip_size_kb": 64, 00:13:19.674 "state": "configuring", 00:13:19.674 "raid_level": "raid5f", 00:13:19.674 "superblock": true, 00:13:19.674 "num_base_bdevs": 4, 00:13:19.674 "num_base_bdevs_discovered": 3, 00:13:19.674 "num_base_bdevs_operational": 4, 00:13:19.674 "base_bdevs_list": [ 00:13:19.674 { 00:13:19.674 "name": null, 00:13:19.674 "uuid": "fc941ecb-cea7-4e1a-b16e-f11c9a2d1c4e", 00:13:19.674 "is_configured": false, 00:13:19.674 "data_offset": 0, 00:13:19.674 "data_size": 63488 00:13:19.674 }, 00:13:19.674 { 00:13:19.674 "name": "BaseBdev2", 00:13:19.674 "uuid": "470d8ca8-ed81-4da3-a88e-f2a18d19fb1c", 00:13:19.674 "is_configured": true, 00:13:19.674 "data_offset": 2048, 00:13:19.674 "data_size": 63488 00:13:19.674 }, 00:13:19.674 { 00:13:19.674 "name": "BaseBdev3", 00:13:19.674 "uuid": "90083aa8-a7b2-4bd7-b537-4a1545968b7c", 00:13:19.674 "is_configured": true, 00:13:19.674 "data_offset": 2048, 00:13:19.674 "data_size": 63488 00:13:19.674 }, 00:13:19.674 { 00:13:19.674 "name": "BaseBdev4", 00:13:19.674 "uuid": "8b9a7ae1-adff-4120-ad44-a6f268ceead3", 00:13:19.674 "is_configured": true, 00:13:19.674 "data_offset": 2048, 00:13:19.674 "data_size": 63488 00:13:19.674 } 00:13:19.674 ] 00:13:19.674 }' 00:13:19.674 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:13:19.674 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fc941ecb-cea7-4e1a-b16e-f11c9a2d1c4e 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.935 [2024-10-30 09:47:58.461965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:19.935 [2024-10-30 09:47:58.462144] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:19.935 [2024-10-30 09:47:58.462154] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:19.935 [2024-10-30 09:47:58.462357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:19.935 NewBaseBdev 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.935 [2024-10-30 09:47:58.466162] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:19.935 [2024-10-30 09:47:58.466180] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:19.935 [2024-10-30 09:47:58.466294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.935 [ 00:13:19.935 { 00:13:19.935 "name": "NewBaseBdev", 00:13:19.935 "aliases": [ 00:13:19.935 "fc941ecb-cea7-4e1a-b16e-f11c9a2d1c4e" 00:13:19.935 ], 00:13:19.935 "product_name": "Malloc disk", 00:13:19.935 "block_size": 512, 00:13:19.935 "num_blocks": 65536, 00:13:19.935 "uuid": "fc941ecb-cea7-4e1a-b16e-f11c9a2d1c4e", 00:13:19.935 "assigned_rate_limits": { 00:13:19.935 "rw_ios_per_sec": 0, 00:13:19.935 "rw_mbytes_per_sec": 0, 00:13:19.935 "r_mbytes_per_sec": 0, 00:13:19.935 "w_mbytes_per_sec": 0 00:13:19.935 }, 00:13:19.935 "claimed": true, 00:13:19.935 "claim_type": "exclusive_write", 00:13:19.935 "zoned": false, 00:13:19.935 "supported_io_types": { 00:13:19.935 "read": true, 00:13:19.935 "write": true, 00:13:19.935 "unmap": true, 00:13:19.935 "flush": true, 00:13:19.935 "reset": true, 00:13:19.935 "nvme_admin": false, 00:13:19.935 "nvme_io": false, 00:13:19.935 "nvme_io_md": false, 00:13:19.935 "write_zeroes": true, 00:13:19.935 "zcopy": true, 00:13:19.935 "get_zone_info": false, 00:13:19.935 "zone_management": false, 00:13:19.935 "zone_append": false, 00:13:19.935 "compare": false, 00:13:19.935 "compare_and_write": false, 00:13:19.935 "abort": true, 00:13:19.935 "seek_hole": false, 00:13:19.935 "seek_data": false, 00:13:19.935 "copy": true, 00:13:19.935 "nvme_iov_md": false 00:13:19.935 }, 00:13:19.935 "memory_domains": [ 00:13:19.935 { 00:13:19.935 "dma_device_id": "system", 00:13:19.935 "dma_device_type": 1 00:13:19.935 }, 00:13:19.935 { 00:13:19.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.935 "dma_device_type": 2 00:13:19.935 } 
00:13:19.935 ], 00:13:19.935 "driver_specific": {} 00:13:19.935 } 00:13:19.935 ] 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.935 
09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.935 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.935 "name": "Existed_Raid", 00:13:19.935 "uuid": "7ea26f67-4ffd-41c6-adc3-dab818e1d891", 00:13:19.935 "strip_size_kb": 64, 00:13:19.935 "state": "online", 00:13:19.935 "raid_level": "raid5f", 00:13:19.935 "superblock": true, 00:13:19.935 "num_base_bdevs": 4, 00:13:19.935 "num_base_bdevs_discovered": 4, 00:13:19.935 "num_base_bdevs_operational": 4, 00:13:19.935 "base_bdevs_list": [ 00:13:19.935 { 00:13:19.935 "name": "NewBaseBdev", 00:13:19.935 "uuid": "fc941ecb-cea7-4e1a-b16e-f11c9a2d1c4e", 00:13:19.935 "is_configured": true, 00:13:19.935 "data_offset": 2048, 00:13:19.935 "data_size": 63488 00:13:19.935 }, 00:13:19.935 { 00:13:19.935 "name": "BaseBdev2", 00:13:19.935 "uuid": "470d8ca8-ed81-4da3-a88e-f2a18d19fb1c", 00:13:19.935 "is_configured": true, 00:13:19.935 "data_offset": 2048, 00:13:19.935 "data_size": 63488 00:13:19.935 }, 00:13:19.935 { 00:13:19.935 "name": "BaseBdev3", 00:13:19.935 "uuid": "90083aa8-a7b2-4bd7-b537-4a1545968b7c", 00:13:19.935 "is_configured": true, 00:13:19.936 "data_offset": 2048, 00:13:19.936 "data_size": 63488 00:13:19.936 }, 00:13:19.936 { 00:13:19.936 "name": "BaseBdev4", 00:13:19.936 "uuid": "8b9a7ae1-adff-4120-ad44-a6f268ceead3", 00:13:19.936 "is_configured": true, 00:13:19.936 "data_offset": 2048, 00:13:19.936 "data_size": 63488 00:13:19.936 } 00:13:19.936 ] 00:13:19.936 }' 00:13:19.936 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.936 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.192 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:20.192 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:13:20.192 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:20.192 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:20.192 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:20.192 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:20.192 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:20.192 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.192 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:20.192 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.192 [2024-10-30 09:47:58.806738] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:20.449 "name": "Existed_Raid", 00:13:20.449 "aliases": [ 00:13:20.449 "7ea26f67-4ffd-41c6-adc3-dab818e1d891" 00:13:20.449 ], 00:13:20.449 "product_name": "Raid Volume", 00:13:20.449 "block_size": 512, 00:13:20.449 "num_blocks": 190464, 00:13:20.449 "uuid": "7ea26f67-4ffd-41c6-adc3-dab818e1d891", 00:13:20.449 "assigned_rate_limits": { 00:13:20.449 "rw_ios_per_sec": 0, 00:13:20.449 "rw_mbytes_per_sec": 0, 00:13:20.449 "r_mbytes_per_sec": 0, 00:13:20.449 "w_mbytes_per_sec": 0 00:13:20.449 }, 00:13:20.449 "claimed": false, 00:13:20.449 "zoned": false, 00:13:20.449 "supported_io_types": { 00:13:20.449 "read": true, 00:13:20.449 "write": true, 00:13:20.449 "unmap": false, 00:13:20.449 "flush": false, 
00:13:20.449 "reset": true, 00:13:20.449 "nvme_admin": false, 00:13:20.449 "nvme_io": false, 00:13:20.449 "nvme_io_md": false, 00:13:20.449 "write_zeroes": true, 00:13:20.449 "zcopy": false, 00:13:20.449 "get_zone_info": false, 00:13:20.449 "zone_management": false, 00:13:20.449 "zone_append": false, 00:13:20.449 "compare": false, 00:13:20.449 "compare_and_write": false, 00:13:20.449 "abort": false, 00:13:20.449 "seek_hole": false, 00:13:20.449 "seek_data": false, 00:13:20.449 "copy": false, 00:13:20.449 "nvme_iov_md": false 00:13:20.449 }, 00:13:20.449 "driver_specific": { 00:13:20.449 "raid": { 00:13:20.449 "uuid": "7ea26f67-4ffd-41c6-adc3-dab818e1d891", 00:13:20.449 "strip_size_kb": 64, 00:13:20.449 "state": "online", 00:13:20.449 "raid_level": "raid5f", 00:13:20.449 "superblock": true, 00:13:20.449 "num_base_bdevs": 4, 00:13:20.449 "num_base_bdevs_discovered": 4, 00:13:20.449 "num_base_bdevs_operational": 4, 00:13:20.449 "base_bdevs_list": [ 00:13:20.449 { 00:13:20.449 "name": "NewBaseBdev", 00:13:20.449 "uuid": "fc941ecb-cea7-4e1a-b16e-f11c9a2d1c4e", 00:13:20.449 "is_configured": true, 00:13:20.449 "data_offset": 2048, 00:13:20.449 "data_size": 63488 00:13:20.449 }, 00:13:20.449 { 00:13:20.449 "name": "BaseBdev2", 00:13:20.449 "uuid": "470d8ca8-ed81-4da3-a88e-f2a18d19fb1c", 00:13:20.449 "is_configured": true, 00:13:20.449 "data_offset": 2048, 00:13:20.449 "data_size": 63488 00:13:20.449 }, 00:13:20.449 { 00:13:20.449 "name": "BaseBdev3", 00:13:20.449 "uuid": "90083aa8-a7b2-4bd7-b537-4a1545968b7c", 00:13:20.449 "is_configured": true, 00:13:20.449 "data_offset": 2048, 00:13:20.449 "data_size": 63488 00:13:20.449 }, 00:13:20.449 { 00:13:20.449 "name": "BaseBdev4", 00:13:20.449 "uuid": "8b9a7ae1-adff-4120-ad44-a6f268ceead3", 00:13:20.449 "is_configured": true, 00:13:20.449 "data_offset": 2048, 00:13:20.449 "data_size": 63488 00:13:20.449 } 00:13:20.449 ] 00:13:20.449 } 00:13:20.449 } 00:13:20.449 }' 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:20.449 BaseBdev2 00:13:20.449 BaseBdev3 00:13:20.449 BaseBdev4' 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.449 09:47:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.449 09:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.449 09:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.449 09:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.449 09:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.449 09:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:20.449 09:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.449 09:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.449 [2024-10-30 09:47:59.026602] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:20.449 [2024-10-30 09:47:59.026632] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:20.449 [2024-10-30 09:47:59.026693] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:20.449 [2024-10-30 09:47:59.026933] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:20.449 [2024-10-30 09:47:59.026948] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:20.449 09:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.449 09:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81119 00:13:20.449 09:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 81119 ']' 00:13:20.449 09:47:59 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@956 -- # kill -0 81119 00:13:20.449 09:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:13:20.450 09:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:20.450 09:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81119 00:13:20.450 09:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:20.450 09:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:20.450 killing process with pid 81119 00:13:20.450 09:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81119' 00:13:20.450 09:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 81119 00:13:20.450 [2024-10-30 09:47:59.056367] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:20.450 09:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 81119 00:13:20.706 [2024-10-30 09:47:59.249377] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:21.271 09:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:21.271 00:13:21.271 real 0m8.088s 00:13:21.271 user 0m13.058s 00:13:21.271 sys 0m1.365s 00:13:21.271 09:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:21.271 09:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.271 ************************************ 00:13:21.271 END TEST raid5f_state_function_test_sb 00:13:21.271 ************************************ 00:13:21.271 09:47:59 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:13:21.271 09:47:59 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:21.271 09:47:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:21.271 09:47:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:21.271 ************************************ 00:13:21.271 START TEST raid5f_superblock_test 00:13:21.271 ************************************ 00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 4 00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81757 00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81757 00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 81757 ']' 00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:21.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.271 09:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:21.528 [2024-10-30 09:47:59.926545] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:13:21.528 [2024-10-30 09:47:59.926669] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81757 ] 00:13:21.528 [2024-10-30 09:48:00.087248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.785 [2024-10-30 09:48:00.185997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.785 [2024-10-30 09:48:00.321366] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:21.785 [2024-10-30 09:48:00.321415] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.351 malloc1 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.351 [2024-10-30 09:48:00.762328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:22.351 [2024-10-30 09:48:00.762386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.351 [2024-10-30 09:48:00.762407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:22.351 [2024-10-30 09:48:00.762416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.351 [2024-10-30 09:48:00.764519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.351 [2024-10-30 09:48:00.764553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:22.351 pt1 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
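The `bdev_raid.sh@416-426` loop traced here repeats the same four-step pattern once per base bdev: pick names, append them to the `base_bdevs_malloc`/`base_bdevs_pt`/`base_bdevs_pt_uuid` arrays, create a 32 MiB malloc bdev with 512-byte blocks, and wrap it in a passthru bdev with a fixed UUID. A condensed sketch of that loop follows, with `rpc_cmd` stubbed out as `echo` so the generated RPC sequence can be inspected without a running SPDK target (the stub is the only assumption; the commands and arguments are taken from the trace):

```shell
#!/usr/bin/env bash
# Stub: print each RPC instead of sending it to a live SPDK target.
rpc_cmd() { echo "rpc_cmd $*"; }

num_base_bdevs=4
base_bdevs_malloc=() base_bdevs_pt=() base_bdevs_pt_uuid=()

for ((i = 1; i <= num_base_bdevs; i++)); do
    bdev_malloc="malloc$i"
    bdev_pt="pt$i"
    bdev_pt_uuid="00000000-0000-0000-0000-00000000000$i"
    base_bdevs_malloc+=("$bdev_malloc")
    base_bdevs_pt+=("$bdev_pt")
    base_bdevs_pt_uuid+=("$bdev_pt_uuid")
    # 32 MiB malloc bdev with 512-byte blocks, then a passthru wrapper on top.
    rpc_cmd bdev_malloc_create 32 512 -b "$bdev_malloc"
    rpc_cmd bdev_passthru_create -b "$bdev_malloc" -p "$bdev_pt" -u "$bdev_pt_uuid"
done

# bdev_raid.sh@430: -z 64 is the 64 KiB strip size set earlier for non-raid1
# levels; -s asks for an on-disk superblock on each base bdev.
rpc_cmd bdev_raid_create -z 64 -r raid5f -b "${base_bdevs_pt[*]}" -n raid_bdev1 -s
```

The final `bdev_raid_create` line matches the one traced at `bdev_raid.sh@430` later in this log; because the test passed `-s`, each base bdev ends up carrying a superblock, which is what makes the later re-create attempt over `malloc1..malloc4` fail with `File exists`.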
00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.351 malloc2 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.351 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.351 [2024-10-30 09:48:00.797860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:22.351 [2024-10-30 09:48:00.798017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.351 [2024-10-30 09:48:00.798042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:22.351 [2024-10-30 09:48:00.798051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.351 [2024-10-30 09:48:00.800131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.352 [2024-10-30 09:48:00.800162] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:22.352 pt2 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.352 malloc3 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.352 [2024-10-30 09:48:00.849220] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:22.352 [2024-10-30 09:48:00.849267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.352 [2024-10-30 09:48:00.849289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:22.352 [2024-10-30 09:48:00.849298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.352 [2024-10-30 09:48:00.851352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.352 [2024-10-30 09:48:00.851385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:22.352 pt3 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.352 09:48:00 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.352 malloc4 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.352 [2024-10-30 09:48:00.885228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:22.352 [2024-10-30 09:48:00.885272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.352 [2024-10-30 09:48:00.885287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:22.352 [2024-10-30 09:48:00.885295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.352 [2024-10-30 09:48:00.887333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.352 [2024-10-30 09:48:00.887363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:22.352 pt4 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:22.352 [2024-10-30 09:48:00.893266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:22.352 [2024-10-30 09:48:00.895051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:22.352 [2024-10-30 09:48:00.895125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:22.352 [2024-10-30 09:48:00.895185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:22.352 [2024-10-30 09:48:00.895369] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:22.352 [2024-10-30 09:48:00.895383] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:22.352 [2024-10-30 09:48:00.895624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:22.352 [2024-10-30 09:48:00.900510] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:22.352 [2024-10-30 09:48:00.900636] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:22.352 [2024-10-30 09:48:00.900823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.352 
09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.352 "name": "raid_bdev1", 00:13:22.352 "uuid": "d9e4eebd-6998-40e3-a8b8-9996412dfbcc", 00:13:22.352 "strip_size_kb": 64, 00:13:22.352 "state": "online", 00:13:22.352 "raid_level": "raid5f", 00:13:22.352 "superblock": true, 00:13:22.352 "num_base_bdevs": 4, 00:13:22.352 "num_base_bdevs_discovered": 4, 00:13:22.352 "num_base_bdevs_operational": 4, 00:13:22.352 "base_bdevs_list": [ 00:13:22.352 { 00:13:22.352 "name": "pt1", 00:13:22.352 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:22.352 "is_configured": true, 00:13:22.352 "data_offset": 2048, 00:13:22.352 "data_size": 63488 00:13:22.352 }, 00:13:22.352 { 00:13:22.352 "name": "pt2", 00:13:22.352 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:22.352 "is_configured": true, 00:13:22.352 "data_offset": 2048, 00:13:22.352 
"data_size": 63488 00:13:22.352 }, 00:13:22.352 { 00:13:22.352 "name": "pt3", 00:13:22.352 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:22.352 "is_configured": true, 00:13:22.352 "data_offset": 2048, 00:13:22.352 "data_size": 63488 00:13:22.352 }, 00:13:22.352 { 00:13:22.352 "name": "pt4", 00:13:22.352 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:22.352 "is_configured": true, 00:13:22.352 "data_offset": 2048, 00:13:22.352 "data_size": 63488 00:13:22.352 } 00:13:22.352 ] 00:13:22.352 }' 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.352 09:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.610 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:22.611 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:22.611 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:22.611 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:22.611 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:22.611 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:22.611 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:22.611 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:22.611 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.611 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.611 [2024-10-30 09:48:01.218346] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:22.868 "name": "raid_bdev1", 00:13:22.868 "aliases": [ 00:13:22.868 "d9e4eebd-6998-40e3-a8b8-9996412dfbcc" 00:13:22.868 ], 00:13:22.868 "product_name": "Raid Volume", 00:13:22.868 "block_size": 512, 00:13:22.868 "num_blocks": 190464, 00:13:22.868 "uuid": "d9e4eebd-6998-40e3-a8b8-9996412dfbcc", 00:13:22.868 "assigned_rate_limits": { 00:13:22.868 "rw_ios_per_sec": 0, 00:13:22.868 "rw_mbytes_per_sec": 0, 00:13:22.868 "r_mbytes_per_sec": 0, 00:13:22.868 "w_mbytes_per_sec": 0 00:13:22.868 }, 00:13:22.868 "claimed": false, 00:13:22.868 "zoned": false, 00:13:22.868 "supported_io_types": { 00:13:22.868 "read": true, 00:13:22.868 "write": true, 00:13:22.868 "unmap": false, 00:13:22.868 "flush": false, 00:13:22.868 "reset": true, 00:13:22.868 "nvme_admin": false, 00:13:22.868 "nvme_io": false, 00:13:22.868 "nvme_io_md": false, 00:13:22.868 "write_zeroes": true, 00:13:22.868 "zcopy": false, 00:13:22.868 "get_zone_info": false, 00:13:22.868 "zone_management": false, 00:13:22.868 "zone_append": false, 00:13:22.868 "compare": false, 00:13:22.868 "compare_and_write": false, 00:13:22.868 "abort": false, 00:13:22.868 "seek_hole": false, 00:13:22.868 "seek_data": false, 00:13:22.868 "copy": false, 00:13:22.868 "nvme_iov_md": false 00:13:22.868 }, 00:13:22.868 "driver_specific": { 00:13:22.868 "raid": { 00:13:22.868 "uuid": "d9e4eebd-6998-40e3-a8b8-9996412dfbcc", 00:13:22.868 "strip_size_kb": 64, 00:13:22.868 "state": "online", 00:13:22.868 "raid_level": "raid5f", 00:13:22.868 "superblock": true, 00:13:22.868 "num_base_bdevs": 4, 00:13:22.868 "num_base_bdevs_discovered": 4, 00:13:22.868 "num_base_bdevs_operational": 4, 00:13:22.868 "base_bdevs_list": [ 00:13:22.868 { 00:13:22.868 "name": "pt1", 00:13:22.868 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:22.868 "is_configured": true, 00:13:22.868 "data_offset": 2048, 
00:13:22.868 "data_size": 63488 00:13:22.868 }, 00:13:22.868 { 00:13:22.868 "name": "pt2", 00:13:22.868 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:22.868 "is_configured": true, 00:13:22.868 "data_offset": 2048, 00:13:22.868 "data_size": 63488 00:13:22.868 }, 00:13:22.868 { 00:13:22.868 "name": "pt3", 00:13:22.868 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:22.868 "is_configured": true, 00:13:22.868 "data_offset": 2048, 00:13:22.868 "data_size": 63488 00:13:22.868 }, 00:13:22.868 { 00:13:22.868 "name": "pt4", 00:13:22.868 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:22.868 "is_configured": true, 00:13:22.868 "data_offset": 2048, 00:13:22.868 "data_size": 63488 00:13:22.868 } 00:13:22.868 ] 00:13:22.868 } 00:13:22.868 } 00:13:22.868 }' 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:22.868 pt2 00:13:22.868 pt3 00:13:22.868 pt4' 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.868 09:48:01 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.868 [2024-10-30 09:48:01.458365] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d9e4eebd-6998-40e3-a8b8-9996412dfbcc 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
d9e4eebd-6998-40e3-a8b8-9996412dfbcc ']' 00:13:22.868 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:22.869 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.869 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.869 [2024-10-30 09:48:01.486182] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:22.869 [2024-10-30 09:48:01.486201] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:22.869 [2024-10-30 09:48:01.486268] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.869 [2024-10-30 09:48:01.486352] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:22.869 [2024-10-30 09:48:01.486366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:23.126 
09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.126 09:48:01 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.126 [2024-10-30 09:48:01.594237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:23.126 [2024-10-30 09:48:01.596179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:23.126 [2024-10-30 09:48:01.596247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:23.126 [2024-10-30 09:48:01.596370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:23.126 [2024-10-30 09:48:01.596469] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:23.126 [2024-10-30 09:48:01.596586] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:23.126 [2024-10-30 09:48:01.596703] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:23.126 [2024-10-30 09:48:01.596782] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:23.126 [2024-10-30 09:48:01.596821] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:23.126 [2024-10-30 09:48:01.596869] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:23.126 request: 00:13:23.126 { 00:13:23.126 "name": "raid_bdev1", 00:13:23.126 "raid_level": "raid5f", 00:13:23.126 "base_bdevs": [ 00:13:23.126 "malloc1", 00:13:23.126 "malloc2", 00:13:23.126 "malloc3", 00:13:23.126 "malloc4" 00:13:23.126 ], 00:13:23.126 "strip_size_kb": 64, 00:13:23.126 "superblock": false, 00:13:23.126 "method": "bdev_raid_create", 00:13:23.126 "req_id": 1 00:13:23.126 } 00:13:23.126 Got JSON-RPC error response 
00:13:23.126 response: 00:13:23.126 { 00:13:23.126 "code": -17, 00:13:23.126 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:23.126 } 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:23.126 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:23.127 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:23.127 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.127 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.127 [2024-10-30 09:48:01.634228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:23.127 [2024-10-30 09:48:01.634273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:13:23.127 [2024-10-30 09:48:01.634288] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:23.127 [2024-10-30 09:48:01.634298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.127 [2024-10-30 09:48:01.636414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.127 [2024-10-30 09:48:01.636449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:23.127 [2024-10-30 09:48:01.636513] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:23.127 [2024-10-30 09:48:01.636561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:23.127 pt1 00:13:23.127 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.127 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:13:23.127 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.127 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.127 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:23.127 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.127 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.127 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.127 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.127 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.127 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:13:23.127 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.127 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.127 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.127 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.127 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.127 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.127 "name": "raid_bdev1", 00:13:23.127 "uuid": "d9e4eebd-6998-40e3-a8b8-9996412dfbcc", 00:13:23.127 "strip_size_kb": 64, 00:13:23.127 "state": "configuring", 00:13:23.127 "raid_level": "raid5f", 00:13:23.127 "superblock": true, 00:13:23.127 "num_base_bdevs": 4, 00:13:23.127 "num_base_bdevs_discovered": 1, 00:13:23.127 "num_base_bdevs_operational": 4, 00:13:23.127 "base_bdevs_list": [ 00:13:23.127 { 00:13:23.127 "name": "pt1", 00:13:23.127 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:23.127 "is_configured": true, 00:13:23.127 "data_offset": 2048, 00:13:23.127 "data_size": 63488 00:13:23.127 }, 00:13:23.127 { 00:13:23.127 "name": null, 00:13:23.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:23.127 "is_configured": false, 00:13:23.127 "data_offset": 2048, 00:13:23.127 "data_size": 63488 00:13:23.127 }, 00:13:23.127 { 00:13:23.127 "name": null, 00:13:23.127 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:23.127 "is_configured": false, 00:13:23.127 "data_offset": 2048, 00:13:23.127 "data_size": 63488 00:13:23.127 }, 00:13:23.127 { 00:13:23.127 "name": null, 00:13:23.127 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:23.127 "is_configured": false, 00:13:23.127 "data_offset": 2048, 00:13:23.127 "data_size": 63488 00:13:23.127 } 00:13:23.127 ] 00:13:23.127 }' 
00:13:23.127 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.127 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.459 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:23.459 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:23.459 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.459 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.459 [2024-10-30 09:48:01.954327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:23.459 [2024-10-30 09:48:01.954390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.459 [2024-10-30 09:48:01.954407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:23.459 [2024-10-30 09:48:01.954417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.459 [2024-10-30 09:48:01.954802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.459 [2024-10-30 09:48:01.954818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:23.459 [2024-10-30 09:48:01.954882] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:23.459 [2024-10-30 09:48:01.954903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:23.459 pt2 00:13:23.459 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.459 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:23.459 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:23.459 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.459 [2024-10-30 09:48:01.962339] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:23.459 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.459 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:13:23.459 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.459 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.459 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:23.459 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.459 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.459 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.459 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.459 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.459 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.459 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.459 09:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.459 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.459 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.459 09:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:13:23.459 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.459 "name": "raid_bdev1", 00:13:23.459 "uuid": "d9e4eebd-6998-40e3-a8b8-9996412dfbcc", 00:13:23.459 "strip_size_kb": 64, 00:13:23.459 "state": "configuring", 00:13:23.459 "raid_level": "raid5f", 00:13:23.459 "superblock": true, 00:13:23.459 "num_base_bdevs": 4, 00:13:23.459 "num_base_bdevs_discovered": 1, 00:13:23.459 "num_base_bdevs_operational": 4, 00:13:23.459 "base_bdevs_list": [ 00:13:23.459 { 00:13:23.459 "name": "pt1", 00:13:23.459 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:23.459 "is_configured": true, 00:13:23.459 "data_offset": 2048, 00:13:23.459 "data_size": 63488 00:13:23.459 }, 00:13:23.459 { 00:13:23.459 "name": null, 00:13:23.459 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:23.459 "is_configured": false, 00:13:23.459 "data_offset": 0, 00:13:23.459 "data_size": 63488 00:13:23.459 }, 00:13:23.459 { 00:13:23.459 "name": null, 00:13:23.459 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:23.459 "is_configured": false, 00:13:23.459 "data_offset": 2048, 00:13:23.459 "data_size": 63488 00:13:23.459 }, 00:13:23.459 { 00:13:23.459 "name": null, 00:13:23.459 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:23.459 "is_configured": false, 00:13:23.459 "data_offset": 2048, 00:13:23.459 "data_size": 63488 00:13:23.459 } 00:13:23.459 ] 00:13:23.459 }' 00:13:23.459 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.459 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.732 [2024-10-30 09:48:02.298414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:23.732 [2024-10-30 09:48:02.298468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.732 [2024-10-30 09:48:02.298485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:23.732 [2024-10-30 09:48:02.298493] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.732 [2024-10-30 09:48:02.298886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.732 [2024-10-30 09:48:02.298899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:23.732 [2024-10-30 09:48:02.298966] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:23.732 [2024-10-30 09:48:02.298984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:23.732 pt2 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.732 [2024-10-30 09:48:02.306397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:13:23.732 [2024-10-30 09:48:02.306437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.732 [2024-10-30 09:48:02.306452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:23.732 [2024-10-30 09:48:02.306460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.732 [2024-10-30 09:48:02.306801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.732 [2024-10-30 09:48:02.306825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:23.732 [2024-10-30 09:48:02.306880] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:23.732 [2024-10-30 09:48:02.306896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:23.732 pt3 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.732 [2024-10-30 09:48:02.314380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:23.732 [2024-10-30 09:48:02.314420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.732 [2024-10-30 09:48:02.314433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:23.732 [2024-10-30 09:48:02.314441] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.732 [2024-10-30 09:48:02.314780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.732 [2024-10-30 09:48:02.314803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:23.732 [2024-10-30 09:48:02.314856] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:23.732 [2024-10-30 09:48:02.314872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:23.732 [2024-10-30 09:48:02.314998] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:23.732 [2024-10-30 09:48:02.315010] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:23.732 [2024-10-30 09:48:02.315248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:23.732 [2024-10-30 09:48:02.319769] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:23.732 pt4 00:13:23.732 [2024-10-30 09:48:02.319894] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:23.732 [2024-10-30 09:48:02.320082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.732 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.991 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.991 "name": "raid_bdev1", 00:13:23.991 "uuid": "d9e4eebd-6998-40e3-a8b8-9996412dfbcc", 00:13:23.991 "strip_size_kb": 64, 00:13:23.991 "state": "online", 00:13:23.991 "raid_level": "raid5f", 00:13:23.991 "superblock": true, 00:13:23.991 "num_base_bdevs": 4, 00:13:23.991 "num_base_bdevs_discovered": 4, 00:13:23.991 "num_base_bdevs_operational": 4, 00:13:23.991 "base_bdevs_list": [ 00:13:23.991 { 00:13:23.991 "name": "pt1", 00:13:23.991 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:23.991 "is_configured": true, 00:13:23.992 
"data_offset": 2048, 00:13:23.992 "data_size": 63488 00:13:23.992 }, 00:13:23.992 { 00:13:23.992 "name": "pt2", 00:13:23.992 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:23.992 "is_configured": true, 00:13:23.992 "data_offset": 2048, 00:13:23.992 "data_size": 63488 00:13:23.992 }, 00:13:23.992 { 00:13:23.992 "name": "pt3", 00:13:23.992 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:23.992 "is_configured": true, 00:13:23.992 "data_offset": 2048, 00:13:23.992 "data_size": 63488 00:13:23.992 }, 00:13:23.992 { 00:13:23.992 "name": "pt4", 00:13:23.992 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:23.992 "is_configured": true, 00:13:23.992 "data_offset": 2048, 00:13:23.992 "data_size": 63488 00:13:23.992 } 00:13:23.992 ] 00:13:23.992 }' 00:13:23.992 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.992 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.250 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:24.250 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:24.250 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:24.250 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:24.250 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:24.250 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:24.250 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:24.250 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:24.250 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.250 09:48:02 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.250 [2024-10-30 09:48:02.641571] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:24.250 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.250 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:24.250 "name": "raid_bdev1", 00:13:24.250 "aliases": [ 00:13:24.250 "d9e4eebd-6998-40e3-a8b8-9996412dfbcc" 00:13:24.250 ], 00:13:24.250 "product_name": "Raid Volume", 00:13:24.250 "block_size": 512, 00:13:24.250 "num_blocks": 190464, 00:13:24.250 "uuid": "d9e4eebd-6998-40e3-a8b8-9996412dfbcc", 00:13:24.250 "assigned_rate_limits": { 00:13:24.250 "rw_ios_per_sec": 0, 00:13:24.250 "rw_mbytes_per_sec": 0, 00:13:24.250 "r_mbytes_per_sec": 0, 00:13:24.250 "w_mbytes_per_sec": 0 00:13:24.250 }, 00:13:24.250 "claimed": false, 00:13:24.250 "zoned": false, 00:13:24.250 "supported_io_types": { 00:13:24.250 "read": true, 00:13:24.250 "write": true, 00:13:24.250 "unmap": false, 00:13:24.250 "flush": false, 00:13:24.250 "reset": true, 00:13:24.250 "nvme_admin": false, 00:13:24.250 "nvme_io": false, 00:13:24.250 "nvme_io_md": false, 00:13:24.250 "write_zeroes": true, 00:13:24.250 "zcopy": false, 00:13:24.250 "get_zone_info": false, 00:13:24.250 "zone_management": false, 00:13:24.250 "zone_append": false, 00:13:24.250 "compare": false, 00:13:24.250 "compare_and_write": false, 00:13:24.250 "abort": false, 00:13:24.250 "seek_hole": false, 00:13:24.250 "seek_data": false, 00:13:24.250 "copy": false, 00:13:24.250 "nvme_iov_md": false 00:13:24.250 }, 00:13:24.250 "driver_specific": { 00:13:24.250 "raid": { 00:13:24.250 "uuid": "d9e4eebd-6998-40e3-a8b8-9996412dfbcc", 00:13:24.250 "strip_size_kb": 64, 00:13:24.250 "state": "online", 00:13:24.250 "raid_level": "raid5f", 00:13:24.250 "superblock": true, 00:13:24.250 "num_base_bdevs": 4, 00:13:24.250 "num_base_bdevs_discovered": 4, 
00:13:24.250 "num_base_bdevs_operational": 4, 00:13:24.250 "base_bdevs_list": [ 00:13:24.250 { 00:13:24.250 "name": "pt1", 00:13:24.250 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:24.250 "is_configured": true, 00:13:24.250 "data_offset": 2048, 00:13:24.250 "data_size": 63488 00:13:24.250 }, 00:13:24.250 { 00:13:24.250 "name": "pt2", 00:13:24.250 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:24.251 "is_configured": true, 00:13:24.251 "data_offset": 2048, 00:13:24.251 "data_size": 63488 00:13:24.251 }, 00:13:24.251 { 00:13:24.251 "name": "pt3", 00:13:24.251 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:24.251 "is_configured": true, 00:13:24.251 "data_offset": 2048, 00:13:24.251 "data_size": 63488 00:13:24.251 }, 00:13:24.251 { 00:13:24.251 "name": "pt4", 00:13:24.251 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:24.251 "is_configured": true, 00:13:24.251 "data_offset": 2048, 00:13:24.251 "data_size": 63488 00:13:24.251 } 00:13:24.251 ] 00:13:24.251 } 00:13:24.251 } 00:13:24.251 }' 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:24.251 pt2 00:13:24.251 pt3 00:13:24.251 pt4' 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.251 09:48:02 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.251 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.251 [2024-10-30 09:48:02.857587] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:24.509 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.509 
09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d9e4eebd-6998-40e3-a8b8-9996412dfbcc '!=' d9e4eebd-6998-40e3-a8b8-9996412dfbcc ']' 00:13:24.509 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:13:24.509 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:24.509 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:24.509 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:24.509 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.509 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.509 [2024-10-30 09:48:02.889449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:24.509 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.509 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:24.509 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.509 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.509 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:24.509 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.509 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.509 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.509 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.509 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:24.509 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.509 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.509 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.509 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.509 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.509 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.509 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.509 "name": "raid_bdev1", 00:13:24.509 "uuid": "d9e4eebd-6998-40e3-a8b8-9996412dfbcc", 00:13:24.509 "strip_size_kb": 64, 00:13:24.509 "state": "online", 00:13:24.509 "raid_level": "raid5f", 00:13:24.509 "superblock": true, 00:13:24.509 "num_base_bdevs": 4, 00:13:24.509 "num_base_bdevs_discovered": 3, 00:13:24.509 "num_base_bdevs_operational": 3, 00:13:24.509 "base_bdevs_list": [ 00:13:24.509 { 00:13:24.509 "name": null, 00:13:24.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.509 "is_configured": false, 00:13:24.509 "data_offset": 0, 00:13:24.509 "data_size": 63488 00:13:24.509 }, 00:13:24.509 { 00:13:24.509 "name": "pt2", 00:13:24.509 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:24.509 "is_configured": true, 00:13:24.509 "data_offset": 2048, 00:13:24.509 "data_size": 63488 00:13:24.509 }, 00:13:24.509 { 00:13:24.509 "name": "pt3", 00:13:24.509 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:24.509 "is_configured": true, 00:13:24.509 "data_offset": 2048, 00:13:24.509 "data_size": 63488 00:13:24.509 }, 00:13:24.509 { 00:13:24.509 "name": "pt4", 00:13:24.510 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:24.510 "is_configured": true, 00:13:24.510 
"data_offset": 2048, 00:13:24.510 "data_size": 63488 00:13:24.510 } 00:13:24.510 ] 00:13:24.510 }' 00:13:24.510 09:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.510 09:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.767 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:24.767 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.767 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.767 [2024-10-30 09:48:03.209479] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:24.767 [2024-10-30 09:48:03.209593] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:24.767 [2024-10-30 09:48:03.209698] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:24.767 [2024-10-30 09:48:03.209776] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:24.767 [2024-10-30 09:48:03.209803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:24.767 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.767 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.767 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.767 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.768 [2024-10-30 09:48:03.277504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:24.768 [2024-10-30 09:48:03.277554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.768 [2024-10-30 09:48:03.277568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:24.768 [2024-10-30 09:48:03.277575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.768 [2024-10-30 09:48:03.279399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.768 [2024-10-30 09:48:03.279430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:24.768 [2024-10-30 09:48:03.279496] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:24.768 [2024-10-30 09:48:03.279531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:24.768 pt2 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.768 "name": "raid_bdev1", 00:13:24.768 "uuid": "d9e4eebd-6998-40e3-a8b8-9996412dfbcc", 00:13:24.768 "strip_size_kb": 64, 00:13:24.768 "state": "configuring", 00:13:24.768 "raid_level": "raid5f", 00:13:24.768 "superblock": true, 00:13:24.768 
"num_base_bdevs": 4, 00:13:24.768 "num_base_bdevs_discovered": 1, 00:13:24.768 "num_base_bdevs_operational": 3, 00:13:24.768 "base_bdevs_list": [ 00:13:24.768 { 00:13:24.768 "name": null, 00:13:24.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.768 "is_configured": false, 00:13:24.768 "data_offset": 2048, 00:13:24.768 "data_size": 63488 00:13:24.768 }, 00:13:24.768 { 00:13:24.768 "name": "pt2", 00:13:24.768 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:24.768 "is_configured": true, 00:13:24.768 "data_offset": 2048, 00:13:24.768 "data_size": 63488 00:13:24.768 }, 00:13:24.768 { 00:13:24.768 "name": null, 00:13:24.768 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:24.768 "is_configured": false, 00:13:24.768 "data_offset": 2048, 00:13:24.768 "data_size": 63488 00:13:24.768 }, 00:13:24.768 { 00:13:24.768 "name": null, 00:13:24.768 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:24.768 "is_configured": false, 00:13:24.768 "data_offset": 2048, 00:13:24.768 "data_size": 63488 00:13:24.768 } 00:13:24.768 ] 00:13:24.768 }' 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.768 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.027 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:25.027 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:25.027 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:25.027 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.027 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.027 [2024-10-30 09:48:03.625578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:25.027 [2024-10-30 
09:48:03.625627] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.027 [2024-10-30 09:48:03.625645] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:25.027 [2024-10-30 09:48:03.625652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.027 [2024-10-30 09:48:03.625989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.027 [2024-10-30 09:48:03.626007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:25.027 [2024-10-30 09:48:03.626081] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:25.027 [2024-10-30 09:48:03.626101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:25.027 pt3 00:13:25.027 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.027 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:13:25.027 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.027 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.027 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:25.027 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.027 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.027 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.027 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.027 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:25.027 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.027 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.027 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.027 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.027 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.284 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.284 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.284 "name": "raid_bdev1", 00:13:25.284 "uuid": "d9e4eebd-6998-40e3-a8b8-9996412dfbcc", 00:13:25.284 "strip_size_kb": 64, 00:13:25.284 "state": "configuring", 00:13:25.284 "raid_level": "raid5f", 00:13:25.284 "superblock": true, 00:13:25.284 "num_base_bdevs": 4, 00:13:25.284 "num_base_bdevs_discovered": 2, 00:13:25.284 "num_base_bdevs_operational": 3, 00:13:25.284 "base_bdevs_list": [ 00:13:25.284 { 00:13:25.284 "name": null, 00:13:25.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.284 "is_configured": false, 00:13:25.284 "data_offset": 2048, 00:13:25.284 "data_size": 63488 00:13:25.284 }, 00:13:25.284 { 00:13:25.284 "name": "pt2", 00:13:25.284 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:25.284 "is_configured": true, 00:13:25.284 "data_offset": 2048, 00:13:25.284 "data_size": 63488 00:13:25.284 }, 00:13:25.284 { 00:13:25.284 "name": "pt3", 00:13:25.284 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:25.284 "is_configured": true, 00:13:25.284 "data_offset": 2048, 00:13:25.284 "data_size": 63488 00:13:25.284 }, 00:13:25.284 { 00:13:25.284 "name": null, 00:13:25.284 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:25.284 "is_configured": false, 00:13:25.284 "data_offset": 2048, 
00:13:25.284 "data_size": 63488 00:13:25.284 } 00:13:25.284 ] 00:13:25.284 }' 00:13:25.284 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.284 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.543 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:25.543 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:25.543 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:13:25.543 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:25.543 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.543 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.543 [2024-10-30 09:48:03.937648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:25.543 [2024-10-30 09:48:03.937694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.543 [2024-10-30 09:48:03.937711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:25.543 [2024-10-30 09:48:03.937718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.543 [2024-10-30 09:48:03.938077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.543 [2024-10-30 09:48:03.938088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:25.543 [2024-10-30 09:48:03.938148] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:25.543 [2024-10-30 09:48:03.938165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:25.543 [2024-10-30 09:48:03.938268] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:25.543 [2024-10-30 09:48:03.938275] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:25.543 [2024-10-30 09:48:03.938471] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:25.543 [2024-10-30 09:48:03.942301] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:25.543 [2024-10-30 09:48:03.942318] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:25.543 [2024-10-30 09:48:03.942528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.543 pt4 00:13:25.543 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.543 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:25.543 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.543 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.543 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:25.543 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.543 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.543 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.543 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.543 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.543 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.543 
09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.543 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.543 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.543 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.543 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.543 09:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.543 "name": "raid_bdev1", 00:13:25.543 "uuid": "d9e4eebd-6998-40e3-a8b8-9996412dfbcc", 00:13:25.543 "strip_size_kb": 64, 00:13:25.543 "state": "online", 00:13:25.543 "raid_level": "raid5f", 00:13:25.543 "superblock": true, 00:13:25.543 "num_base_bdevs": 4, 00:13:25.543 "num_base_bdevs_discovered": 3, 00:13:25.543 "num_base_bdevs_operational": 3, 00:13:25.543 "base_bdevs_list": [ 00:13:25.543 { 00:13:25.543 "name": null, 00:13:25.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.543 "is_configured": false, 00:13:25.543 "data_offset": 2048, 00:13:25.543 "data_size": 63488 00:13:25.543 }, 00:13:25.543 { 00:13:25.543 "name": "pt2", 00:13:25.543 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:25.543 "is_configured": true, 00:13:25.543 "data_offset": 2048, 00:13:25.543 "data_size": 63488 00:13:25.543 }, 00:13:25.543 { 00:13:25.543 "name": "pt3", 00:13:25.543 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:25.543 "is_configured": true, 00:13:25.543 "data_offset": 2048, 00:13:25.543 "data_size": 63488 00:13:25.543 }, 00:13:25.543 { 00:13:25.543 "name": "pt4", 00:13:25.543 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:25.543 "is_configured": true, 00:13:25.543 "data_offset": 2048, 00:13:25.543 "data_size": 63488 00:13:25.543 } 00:13:25.543 ] 00:13:25.543 }' 00:13:25.543 09:48:03 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.543 09:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.800 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:25.800 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.800 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.800 [2024-10-30 09:48:04.250713] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:25.800 [2024-10-30 09:48:04.250733] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:25.800 [2024-10-30 09:48:04.250790] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:25.800 [2024-10-30 09:48:04.250848] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:25.800 [2024-10-30 09:48:04.250857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:25.800 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.800 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:25.800 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.800 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.800 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.800 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.800 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:25.800 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:13:25.800 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.801 [2024-10-30 09:48:04.302714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:25.801 [2024-10-30 09:48:04.302761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.801 [2024-10-30 09:48:04.302777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:13:25.801 [2024-10-30 09:48:04.302786] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.801 [2024-10-30 09:48:04.304604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.801 [2024-10-30 09:48:04.304636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:25.801 [2024-10-30 09:48:04.304695] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:25.801 [2024-10-30 09:48:04.304732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:25.801 
[2024-10-30 09:48:04.304823] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:25.801 [2024-10-30 09:48:04.304833] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:25.801 [2024-10-30 09:48:04.304844] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:13:25.801 [2024-10-30 09:48:04.304884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:25.801 [2024-10-30 09:48:04.304978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:25.801 pt1 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.801 "name": "raid_bdev1", 00:13:25.801 "uuid": "d9e4eebd-6998-40e3-a8b8-9996412dfbcc", 00:13:25.801 "strip_size_kb": 64, 00:13:25.801 "state": "configuring", 00:13:25.801 "raid_level": "raid5f", 00:13:25.801 "superblock": true, 00:13:25.801 "num_base_bdevs": 4, 00:13:25.801 "num_base_bdevs_discovered": 2, 00:13:25.801 "num_base_bdevs_operational": 3, 00:13:25.801 "base_bdevs_list": [ 00:13:25.801 { 00:13:25.801 "name": null, 00:13:25.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.801 "is_configured": false, 00:13:25.801 "data_offset": 2048, 00:13:25.801 "data_size": 63488 00:13:25.801 }, 00:13:25.801 { 00:13:25.801 "name": "pt2", 00:13:25.801 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:25.801 "is_configured": true, 00:13:25.801 "data_offset": 2048, 00:13:25.801 "data_size": 63488 00:13:25.801 }, 00:13:25.801 { 00:13:25.801 "name": "pt3", 00:13:25.801 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:25.801 "is_configured": true, 00:13:25.801 "data_offset": 2048, 00:13:25.801 "data_size": 63488 00:13:25.801 }, 00:13:25.801 { 00:13:25.801 "name": null, 00:13:25.801 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:25.801 "is_configured": false, 00:13:25.801 "data_offset": 2048, 00:13:25.801 "data_size": 63488 00:13:25.801 } 00:13:25.801 ] 
00:13:25.801 }' 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.801 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.058 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:26.058 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.058 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:26.058 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.058 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.058 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:26.058 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:26.058 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.058 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.058 [2024-10-30 09:48:04.646797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:26.058 [2024-10-30 09:48:04.646938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.058 [2024-10-30 09:48:04.646963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:26.058 [2024-10-30 09:48:04.646970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.058 [2024-10-30 09:48:04.647315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.058 [2024-10-30 09:48:04.647327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:13:26.058 [2024-10-30 09:48:04.647389] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:26.058 [2024-10-30 09:48:04.647408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:26.058 [2024-10-30 09:48:04.647508] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:26.059 [2024-10-30 09:48:04.647514] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:26.059 [2024-10-30 09:48:04.647701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:26.059 [2024-10-30 09:48:04.651451] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:26.059 [2024-10-30 09:48:04.651469] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:26.059 [2024-10-30 09:48:04.651671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.059 pt4 00:13:26.059 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.059 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:26.059 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.059 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.059 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:26.059 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.059 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.059 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.059 09:48:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.059 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.059 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.059 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.059 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.059 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.059 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.059 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.316 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.316 "name": "raid_bdev1", 00:13:26.316 "uuid": "d9e4eebd-6998-40e3-a8b8-9996412dfbcc", 00:13:26.316 "strip_size_kb": 64, 00:13:26.316 "state": "online", 00:13:26.316 "raid_level": "raid5f", 00:13:26.316 "superblock": true, 00:13:26.316 "num_base_bdevs": 4, 00:13:26.316 "num_base_bdevs_discovered": 3, 00:13:26.316 "num_base_bdevs_operational": 3, 00:13:26.316 "base_bdevs_list": [ 00:13:26.316 { 00:13:26.316 "name": null, 00:13:26.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.316 "is_configured": false, 00:13:26.316 "data_offset": 2048, 00:13:26.316 "data_size": 63488 00:13:26.316 }, 00:13:26.316 { 00:13:26.316 "name": "pt2", 00:13:26.316 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:26.316 "is_configured": true, 00:13:26.316 "data_offset": 2048, 00:13:26.316 "data_size": 63488 00:13:26.316 }, 00:13:26.316 { 00:13:26.316 "name": "pt3", 00:13:26.316 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:26.316 "is_configured": true, 00:13:26.316 "data_offset": 2048, 00:13:26.316 "data_size": 63488 
00:13:26.316 }, 00:13:26.316 { 00:13:26.316 "name": "pt4", 00:13:26.316 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:26.316 "is_configured": true, 00:13:26.316 "data_offset": 2048, 00:13:26.316 "data_size": 63488 00:13:26.316 } 00:13:26.316 ] 00:13:26.316 }' 00:13:26.316 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.316 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.574 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:26.574 09:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:26.574 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.574 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.574 09:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.574 09:48:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:26.574 09:48:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:26.574 09:48:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.574 09:48:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.574 09:48:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:26.574 [2024-10-30 09:48:05.008081] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:26.574 09:48:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.574 09:48:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d9e4eebd-6998-40e3-a8b8-9996412dfbcc '!=' d9e4eebd-6998-40e3-a8b8-9996412dfbcc ']' 00:13:26.574 09:48:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81757 00:13:26.574 09:48:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 81757 ']' 00:13:26.574 09:48:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 81757 00:13:26.574 09:48:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:13:26.574 09:48:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:26.574 09:48:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81757 00:13:26.574 killing process with pid 81757 00:13:26.574 09:48:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:26.574 09:48:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:26.574 09:48:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81757' 00:13:26.574 09:48:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 81757 00:13:26.574 [2024-10-30 09:48:05.060383] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:26.574 [2024-10-30 09:48:05.060447] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:26.574 09:48:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 81757 00:13:26.574 [2024-10-30 09:48:05.060505] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:26.574 [2024-10-30 09:48:05.060515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:26.832 [2024-10-30 09:48:05.248537] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:27.436 ************************************ 00:13:27.436 END TEST raid5f_superblock_test 00:13:27.436 
************************************ 00:13:27.436 09:48:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:27.436 00:13:27.436 real 0m5.938s 00:13:27.436 user 0m9.465s 00:13:27.436 sys 0m0.987s 00:13:27.436 09:48:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:27.436 09:48:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.436 09:48:05 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:13:27.436 09:48:05 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:13:27.436 09:48:05 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:27.436 09:48:05 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:27.436 09:48:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:27.436 ************************************ 00:13:27.436 START TEST raid5f_rebuild_test 00:13:27.437 ************************************ 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 false false true 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 
00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:27.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82217 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82217 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 82217 ']' 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.437 09:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:27.437 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:27.437 Zero copy mechanism will not be used. 00:13:27.437 [2024-10-30 09:48:05.926049] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:13:27.437 [2024-10-30 09:48:05.926154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82217 ] 00:13:27.694 [2024-10-30 09:48:06.075552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.694 [2024-10-30 09:48:06.157716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.694 [2024-10-30 09:48:06.265780] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:27.694 [2024-10-30 09:48:06.265949] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:28.259 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:28.259 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:13:28.259 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:28.259 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:28.259 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.259 09:48:06 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.259 BaseBdev1_malloc 00:13:28.259 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.259 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:28.259 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.259 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.259 [2024-10-30 09:48:06.807390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:28.259 [2024-10-30 09:48:06.807446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.259 [2024-10-30 09:48:06.807464] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:28.259 [2024-10-30 09:48:06.807473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.259 [2024-10-30 09:48:06.809286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.259 [2024-10-30 09:48:06.809318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:28.259 BaseBdev1 00:13:28.259 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.259 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:28.259 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:28.259 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.259 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.259 BaseBdev2_malloc 00:13:28.259 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:28.259 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:28.259 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.259 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.259 [2024-10-30 09:48:06.838780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:28.259 [2024-10-30 09:48:06.838825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.259 [2024-10-30 09:48:06.838839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:28.259 [2024-10-30 09:48:06.838847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.259 [2024-10-30 09:48:06.840601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.259 [2024-10-30 09:48:06.840731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:28.259 BaseBdev2 00:13:28.259 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.259 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:28.259 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:28.259 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.259 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.518 BaseBdev3_malloc 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.518 [2024-10-30 09:48:06.890618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:28.518 [2024-10-30 09:48:06.890665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.518 [2024-10-30 09:48:06.890683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:28.518 [2024-10-30 09:48:06.890692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.518 [2024-10-30 09:48:06.892425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.518 [2024-10-30 09:48:06.892459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:28.518 BaseBdev3 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.518 BaseBdev4_malloc 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.518 [2024-10-30 09:48:06.922283] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:28.518 [2024-10-30 09:48:06.922325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.518 [2024-10-30 09:48:06.922339] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:28.518 [2024-10-30 09:48:06.922348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.518 [2024-10-30 09:48:06.924128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.518 [2024-10-30 09:48:06.924160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:28.518 BaseBdev4 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.518 spare_malloc 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.518 spare_delay 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.518 [2024-10-30 09:48:06.961818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:28.518 [2024-10-30 09:48:06.961959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.518 [2024-10-30 09:48:06.961978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:28.518 [2024-10-30 09:48:06.961987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.518 [2024-10-30 09:48:06.963806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.518 [2024-10-30 09:48:06.963839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:28.518 spare 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.518 [2024-10-30 09:48:06.969873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:28.518 [2024-10-30 09:48:06.971406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:28.518 [2024-10-30 09:48:06.971453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:28.518 [2024-10-30 09:48:06.971495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:28.518 [2024-10-30 09:48:06.971564] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:28.518 
[2024-10-30 09:48:06.971574] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:28.518 [2024-10-30 09:48:06.971783] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:28.518 [2024-10-30 09:48:06.975882] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:28.518 [2024-10-30 09:48:06.975898] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:28.518 [2024-10-30 09:48:06.976050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.518 09:48:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.518 09:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.518 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.518 "name": "raid_bdev1", 00:13:28.518 "uuid": "598faa48-4e4d-435b-8784-eb58561dffa7", 00:13:28.518 "strip_size_kb": 64, 00:13:28.518 "state": "online", 00:13:28.518 "raid_level": "raid5f", 00:13:28.518 "superblock": false, 00:13:28.518 "num_base_bdevs": 4, 00:13:28.518 "num_base_bdevs_discovered": 4, 00:13:28.518 "num_base_bdevs_operational": 4, 00:13:28.518 "base_bdevs_list": [ 00:13:28.518 { 00:13:28.518 "name": "BaseBdev1", 00:13:28.518 "uuid": "611cc7ec-56a8-5969-9d62-3b0c70e87e0e", 00:13:28.518 "is_configured": true, 00:13:28.519 "data_offset": 0, 00:13:28.519 "data_size": 65536 00:13:28.519 }, 00:13:28.519 { 00:13:28.519 "name": "BaseBdev2", 00:13:28.519 "uuid": "0e26d78d-9492-5816-b9ff-b1fb05a0dbdf", 00:13:28.519 "is_configured": true, 00:13:28.519 "data_offset": 0, 00:13:28.519 "data_size": 65536 00:13:28.519 }, 00:13:28.519 { 00:13:28.519 "name": "BaseBdev3", 00:13:28.519 "uuid": "f6041394-ef64-59a6-af1a-c6c5b636cecb", 00:13:28.519 "is_configured": true, 00:13:28.519 "data_offset": 0, 00:13:28.519 "data_size": 65536 00:13:28.519 }, 00:13:28.519 { 00:13:28.519 "name": "BaseBdev4", 00:13:28.519 "uuid": "eee3faba-d215-52a0-a72f-cd15c20101bd", 00:13:28.519 "is_configured": true, 00:13:28.519 "data_offset": 0, 00:13:28.519 "data_size": 65536 00:13:28.519 } 00:13:28.519 ] 00:13:28.519 }' 00:13:28.519 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.519 09:48:07 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:28.777 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:28.777 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:28.777 09:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.777 09:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.777 [2024-10-30 09:48:07.300563] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:28.777 09:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.777 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:13:28.777 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.777 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:28.777 09:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.777 09:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.777 09:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.777 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:28.777 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:28.777 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:28.777 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:28.777 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:28.777 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:13:28.777 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:28.777 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:28.777 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:28.777 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:28.777 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:28.777 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:28.777 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:28.777 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:29.035 [2024-10-30 09:48:07.536455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:29.035 /dev/nbd0 00:13:29.035 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:29.035 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:29.035 09:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:29.035 09:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:13:29.035 09:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:29.035 09:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:29.035 09:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:29.035 09:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:13:29.035 09:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:29.035 09:48:07 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:29.035 09:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:29.035 1+0 records in 00:13:29.035 1+0 records out 00:13:29.035 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182347 s, 22.5 MB/s 00:13:29.035 09:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.035 09:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:13:29.035 09:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.035 09:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:29.035 09:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:13:29.035 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:29.035 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:29.035 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:29.035 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:13:29.035 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:13:29.035 09:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:13:29.601 512+0 records in 00:13:29.601 512+0 records out 00:13:29.601 100663296 bytes (101 MB, 96 MiB) copied, 0.474195 s, 212 MB/s 00:13:29.601 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:29.601 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:13:29.601 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:29.601 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:29.601 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:29.601 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.601 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:29.859 [2024-10-30 09:48:08.258569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.859 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:29.859 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:29.859 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:29.859 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.859 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.859 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:29.860 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:29.860 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.860 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:29.860 09:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.860 09:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.860 [2024-10-30 09:48:08.298888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:29.860 09:48:08 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.860 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:29.860 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.860 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.860 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:29.860 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.860 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.860 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.860 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.860 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.860 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.860 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.860 09:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.860 09:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.860 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.860 09:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.860 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.860 "name": "raid_bdev1", 00:13:29.860 "uuid": "598faa48-4e4d-435b-8784-eb58561dffa7", 00:13:29.860 "strip_size_kb": 64, 00:13:29.860 "state": "online", 00:13:29.860 "raid_level": "raid5f", 00:13:29.860 
"superblock": false, 00:13:29.860 "num_base_bdevs": 4, 00:13:29.860 "num_base_bdevs_discovered": 3, 00:13:29.860 "num_base_bdevs_operational": 3, 00:13:29.860 "base_bdevs_list": [ 00:13:29.860 { 00:13:29.860 "name": null, 00:13:29.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.860 "is_configured": false, 00:13:29.860 "data_offset": 0, 00:13:29.860 "data_size": 65536 00:13:29.860 }, 00:13:29.860 { 00:13:29.860 "name": "BaseBdev2", 00:13:29.860 "uuid": "0e26d78d-9492-5816-b9ff-b1fb05a0dbdf", 00:13:29.860 "is_configured": true, 00:13:29.860 "data_offset": 0, 00:13:29.860 "data_size": 65536 00:13:29.860 }, 00:13:29.860 { 00:13:29.860 "name": "BaseBdev3", 00:13:29.860 "uuid": "f6041394-ef64-59a6-af1a-c6c5b636cecb", 00:13:29.860 "is_configured": true, 00:13:29.860 "data_offset": 0, 00:13:29.860 "data_size": 65536 00:13:29.860 }, 00:13:29.860 { 00:13:29.860 "name": "BaseBdev4", 00:13:29.860 "uuid": "eee3faba-d215-52a0-a72f-cd15c20101bd", 00:13:29.860 "is_configured": true, 00:13:29.860 "data_offset": 0, 00:13:29.860 "data_size": 65536 00:13:29.860 } 00:13:29.860 ] 00:13:29.860 }' 00:13:29.860 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.860 09:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.118 09:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:30.118 09:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.118 09:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.118 [2024-10-30 09:48:08.618941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:30.118 [2024-10-30 09:48:08.627161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:13:30.118 09:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.118 09:48:08 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:30.118 [2024-10-30 09:48:08.632665] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:31.052 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.052 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.052 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:31.052 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.052 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.052 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.052 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.052 09:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.052 09:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.052 09:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.052 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.052 "name": "raid_bdev1", 00:13:31.052 "uuid": "598faa48-4e4d-435b-8784-eb58561dffa7", 00:13:31.052 "strip_size_kb": 64, 00:13:31.052 "state": "online", 00:13:31.052 "raid_level": "raid5f", 00:13:31.052 "superblock": false, 00:13:31.052 "num_base_bdevs": 4, 00:13:31.052 "num_base_bdevs_discovered": 4, 00:13:31.052 "num_base_bdevs_operational": 4, 00:13:31.052 "process": { 00:13:31.052 "type": "rebuild", 00:13:31.052 "target": "spare", 00:13:31.052 "progress": { 00:13:31.052 "blocks": 19200, 00:13:31.052 "percent": 9 00:13:31.052 } 00:13:31.052 }, 00:13:31.052 
"base_bdevs_list": [ 00:13:31.052 { 00:13:31.052 "name": "spare", 00:13:31.052 "uuid": "8a82c150-9730-58f1-970f-020524a0bcc2", 00:13:31.052 "is_configured": true, 00:13:31.052 "data_offset": 0, 00:13:31.052 "data_size": 65536 00:13:31.052 }, 00:13:31.052 { 00:13:31.052 "name": "BaseBdev2", 00:13:31.052 "uuid": "0e26d78d-9492-5816-b9ff-b1fb05a0dbdf", 00:13:31.052 "is_configured": true, 00:13:31.052 "data_offset": 0, 00:13:31.052 "data_size": 65536 00:13:31.052 }, 00:13:31.052 { 00:13:31.052 "name": "BaseBdev3", 00:13:31.052 "uuid": "f6041394-ef64-59a6-af1a-c6c5b636cecb", 00:13:31.052 "is_configured": true, 00:13:31.052 "data_offset": 0, 00:13:31.052 "data_size": 65536 00:13:31.052 }, 00:13:31.052 { 00:13:31.052 "name": "BaseBdev4", 00:13:31.052 "uuid": "eee3faba-d215-52a0-a72f-cd15c20101bd", 00:13:31.052 "is_configured": true, 00:13:31.052 "data_offset": 0, 00:13:31.052 "data_size": 65536 00:13:31.052 } 00:13:31.052 ] 00:13:31.052 }' 00:13:31.052 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.328 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:31.328 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.328 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:31.328 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:31.328 09:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.328 09:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.328 [2024-10-30 09:48:09.737528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:31.328 [2024-10-30 09:48:09.739859] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:31.328 
[2024-10-30 09:48:09.740005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.328 [2024-10-30 09:48:09.740021] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:31.328 [2024-10-30 09:48:09.740030] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:31.328 09:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.328 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:31.328 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.328 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.328 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:31.328 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.328 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.328 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.328 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.328 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.328 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.328 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.328 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.328 09:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.328 09:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:13:31.328 09:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.328 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.328 "name": "raid_bdev1", 00:13:31.328 "uuid": "598faa48-4e4d-435b-8784-eb58561dffa7", 00:13:31.328 "strip_size_kb": 64, 00:13:31.328 "state": "online", 00:13:31.328 "raid_level": "raid5f", 00:13:31.328 "superblock": false, 00:13:31.328 "num_base_bdevs": 4, 00:13:31.328 "num_base_bdevs_discovered": 3, 00:13:31.328 "num_base_bdevs_operational": 3, 00:13:31.328 "base_bdevs_list": [ 00:13:31.328 { 00:13:31.328 "name": null, 00:13:31.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.328 "is_configured": false, 00:13:31.328 "data_offset": 0, 00:13:31.328 "data_size": 65536 00:13:31.328 }, 00:13:31.328 { 00:13:31.328 "name": "BaseBdev2", 00:13:31.328 "uuid": "0e26d78d-9492-5816-b9ff-b1fb05a0dbdf", 00:13:31.328 "is_configured": true, 00:13:31.328 "data_offset": 0, 00:13:31.328 "data_size": 65536 00:13:31.328 }, 00:13:31.328 { 00:13:31.328 "name": "BaseBdev3", 00:13:31.328 "uuid": "f6041394-ef64-59a6-af1a-c6c5b636cecb", 00:13:31.328 "is_configured": true, 00:13:31.328 "data_offset": 0, 00:13:31.328 "data_size": 65536 00:13:31.328 }, 00:13:31.328 { 00:13:31.328 "name": "BaseBdev4", 00:13:31.328 "uuid": "eee3faba-d215-52a0-a72f-cd15c20101bd", 00:13:31.328 "is_configured": true, 00:13:31.328 "data_offset": 0, 00:13:31.328 "data_size": 65536 00:13:31.328 } 00:13:31.328 ] 00:13:31.328 }' 00:13:31.328 09:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.328 09:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.585 09:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:31.585 09:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.585 09:48:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:31.585 09:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:31.585 09:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.585 09:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.585 09:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.585 09:48:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.585 09:48:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.585 09:48:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.585 09:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.585 "name": "raid_bdev1", 00:13:31.585 "uuid": "598faa48-4e4d-435b-8784-eb58561dffa7", 00:13:31.585 "strip_size_kb": 64, 00:13:31.585 "state": "online", 00:13:31.585 "raid_level": "raid5f", 00:13:31.585 "superblock": false, 00:13:31.585 "num_base_bdevs": 4, 00:13:31.585 "num_base_bdevs_discovered": 3, 00:13:31.585 "num_base_bdevs_operational": 3, 00:13:31.585 "base_bdevs_list": [ 00:13:31.585 { 00:13:31.585 "name": null, 00:13:31.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.585 "is_configured": false, 00:13:31.585 "data_offset": 0, 00:13:31.585 "data_size": 65536 00:13:31.585 }, 00:13:31.585 { 00:13:31.585 "name": "BaseBdev2", 00:13:31.585 "uuid": "0e26d78d-9492-5816-b9ff-b1fb05a0dbdf", 00:13:31.585 "is_configured": true, 00:13:31.585 "data_offset": 0, 00:13:31.585 "data_size": 65536 00:13:31.585 }, 00:13:31.585 { 00:13:31.585 "name": "BaseBdev3", 00:13:31.585 "uuid": "f6041394-ef64-59a6-af1a-c6c5b636cecb", 00:13:31.585 "is_configured": true, 00:13:31.585 "data_offset": 0, 00:13:31.585 "data_size": 65536 00:13:31.585 }, 
00:13:31.585 { 00:13:31.585 "name": "BaseBdev4", 00:13:31.585 "uuid": "eee3faba-d215-52a0-a72f-cd15c20101bd", 00:13:31.585 "is_configured": true, 00:13:31.585 "data_offset": 0, 00:13:31.585 "data_size": 65536 00:13:31.585 } 00:13:31.585 ] 00:13:31.585 }' 00:13:31.585 09:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.585 09:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:31.585 09:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.585 09:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:31.585 09:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:31.585 09:48:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.585 09:48:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.585 [2024-10-30 09:48:10.168112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:31.585 [2024-10-30 09:48:10.175860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:13:31.585 09:48:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.585 09:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:31.585 [2024-10-30 09:48:10.181117] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:32.957 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:32.957 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.957 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:32.957 09:48:11 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:32.957 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.957 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.957 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.957 09:48:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.957 09:48:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.957 09:48:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.957 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.957 "name": "raid_bdev1", 00:13:32.957 "uuid": "598faa48-4e4d-435b-8784-eb58561dffa7", 00:13:32.957 "strip_size_kb": 64, 00:13:32.957 "state": "online", 00:13:32.957 "raid_level": "raid5f", 00:13:32.957 "superblock": false, 00:13:32.957 "num_base_bdevs": 4, 00:13:32.957 "num_base_bdevs_discovered": 4, 00:13:32.957 "num_base_bdevs_operational": 4, 00:13:32.957 "process": { 00:13:32.957 "type": "rebuild", 00:13:32.957 "target": "spare", 00:13:32.957 "progress": { 00:13:32.957 "blocks": 19200, 00:13:32.957 "percent": 9 00:13:32.957 } 00:13:32.957 }, 00:13:32.957 "base_bdevs_list": [ 00:13:32.957 { 00:13:32.957 "name": "spare", 00:13:32.957 "uuid": "8a82c150-9730-58f1-970f-020524a0bcc2", 00:13:32.957 "is_configured": true, 00:13:32.957 "data_offset": 0, 00:13:32.957 "data_size": 65536 00:13:32.957 }, 00:13:32.957 { 00:13:32.957 "name": "BaseBdev2", 00:13:32.957 "uuid": "0e26d78d-9492-5816-b9ff-b1fb05a0dbdf", 00:13:32.957 "is_configured": true, 00:13:32.957 "data_offset": 0, 00:13:32.957 "data_size": 65536 00:13:32.957 }, 00:13:32.957 { 00:13:32.957 "name": "BaseBdev3", 00:13:32.957 "uuid": "f6041394-ef64-59a6-af1a-c6c5b636cecb", 00:13:32.957 
"is_configured": true, 00:13:32.957 "data_offset": 0, 00:13:32.957 "data_size": 65536 00:13:32.957 }, 00:13:32.957 { 00:13:32.957 "name": "BaseBdev4", 00:13:32.957 "uuid": "eee3faba-d215-52a0-a72f-cd15c20101bd", 00:13:32.957 "is_configured": true, 00:13:32.957 "data_offset": 0, 00:13:32.958 "data_size": 65536 00:13:32.958 } 00:13:32.958 ] 00:13:32.958 }' 00:13:32.958 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.958 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:32.958 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.958 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:32.958 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:32.958 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:32.958 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:32.958 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=486 00:13:32.958 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:32.958 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:32.958 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.958 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:32.958 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:32.958 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.958 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:32.958 09:48:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.958 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.958 09:48:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.958 09:48:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.958 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.958 "name": "raid_bdev1", 00:13:32.958 "uuid": "598faa48-4e4d-435b-8784-eb58561dffa7", 00:13:32.958 "strip_size_kb": 64, 00:13:32.958 "state": "online", 00:13:32.958 "raid_level": "raid5f", 00:13:32.958 "superblock": false, 00:13:32.958 "num_base_bdevs": 4, 00:13:32.958 "num_base_bdevs_discovered": 4, 00:13:32.958 "num_base_bdevs_operational": 4, 00:13:32.958 "process": { 00:13:32.958 "type": "rebuild", 00:13:32.958 "target": "spare", 00:13:32.958 "progress": { 00:13:32.958 "blocks": 21120, 00:13:32.958 "percent": 10 00:13:32.958 } 00:13:32.958 }, 00:13:32.958 "base_bdevs_list": [ 00:13:32.958 { 00:13:32.958 "name": "spare", 00:13:32.958 "uuid": "8a82c150-9730-58f1-970f-020524a0bcc2", 00:13:32.958 "is_configured": true, 00:13:32.958 "data_offset": 0, 00:13:32.958 "data_size": 65536 00:13:32.958 }, 00:13:32.958 { 00:13:32.958 "name": "BaseBdev2", 00:13:32.958 "uuid": "0e26d78d-9492-5816-b9ff-b1fb05a0dbdf", 00:13:32.958 "is_configured": true, 00:13:32.958 "data_offset": 0, 00:13:32.958 "data_size": 65536 00:13:32.958 }, 00:13:32.958 { 00:13:32.958 "name": "BaseBdev3", 00:13:32.958 "uuid": "f6041394-ef64-59a6-af1a-c6c5b636cecb", 00:13:32.958 "is_configured": true, 00:13:32.958 "data_offset": 0, 00:13:32.958 "data_size": 65536 00:13:32.958 }, 00:13:32.958 { 00:13:32.958 "name": "BaseBdev4", 00:13:32.958 "uuid": "eee3faba-d215-52a0-a72f-cd15c20101bd", 00:13:32.958 "is_configured": true, 00:13:32.958 "data_offset": 0, 
00:13:32.958 "data_size": 65536 00:13:32.958 } 00:13:32.958 ] 00:13:32.958 }' 00:13:32.958 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.958 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:32.958 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.958 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:32.958 09:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:33.890 09:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:33.890 09:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.890 09:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.890 09:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.890 09:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.890 09:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.890 09:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.890 09:48:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.890 09:48:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.890 09:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.890 09:48:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.890 09:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.890 "name": "raid_bdev1", 00:13:33.890 "uuid": 
"598faa48-4e4d-435b-8784-eb58561dffa7", 00:13:33.890 "strip_size_kb": 64, 00:13:33.890 "state": "online", 00:13:33.890 "raid_level": "raid5f", 00:13:33.890 "superblock": false, 00:13:33.890 "num_base_bdevs": 4, 00:13:33.890 "num_base_bdevs_discovered": 4, 00:13:33.890 "num_base_bdevs_operational": 4, 00:13:33.890 "process": { 00:13:33.890 "type": "rebuild", 00:13:33.890 "target": "spare", 00:13:33.890 "progress": { 00:13:33.891 "blocks": 42240, 00:13:33.891 "percent": 21 00:13:33.891 } 00:13:33.891 }, 00:13:33.891 "base_bdevs_list": [ 00:13:33.891 { 00:13:33.891 "name": "spare", 00:13:33.891 "uuid": "8a82c150-9730-58f1-970f-020524a0bcc2", 00:13:33.891 "is_configured": true, 00:13:33.891 "data_offset": 0, 00:13:33.891 "data_size": 65536 00:13:33.891 }, 00:13:33.891 { 00:13:33.891 "name": "BaseBdev2", 00:13:33.891 "uuid": "0e26d78d-9492-5816-b9ff-b1fb05a0dbdf", 00:13:33.891 "is_configured": true, 00:13:33.891 "data_offset": 0, 00:13:33.891 "data_size": 65536 00:13:33.891 }, 00:13:33.891 { 00:13:33.891 "name": "BaseBdev3", 00:13:33.891 "uuid": "f6041394-ef64-59a6-af1a-c6c5b636cecb", 00:13:33.891 "is_configured": true, 00:13:33.891 "data_offset": 0, 00:13:33.891 "data_size": 65536 00:13:33.891 }, 00:13:33.891 { 00:13:33.891 "name": "BaseBdev4", 00:13:33.891 "uuid": "eee3faba-d215-52a0-a72f-cd15c20101bd", 00:13:33.891 "is_configured": true, 00:13:33.891 "data_offset": 0, 00:13:33.891 "data_size": 65536 00:13:33.891 } 00:13:33.891 ] 00:13:33.891 }' 00:13:33.891 09:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.891 09:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:33.891 09:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.891 09:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.891 09:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:13:35.263 09:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:35.263 09:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:35.263 09:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.263 09:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:35.263 09:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:35.263 09:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.263 09:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.263 09:48:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.263 09:48:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.263 09:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.263 09:48:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.263 09:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.263 "name": "raid_bdev1", 00:13:35.263 "uuid": "598faa48-4e4d-435b-8784-eb58561dffa7", 00:13:35.263 "strip_size_kb": 64, 00:13:35.263 "state": "online", 00:13:35.263 "raid_level": "raid5f", 00:13:35.263 "superblock": false, 00:13:35.263 "num_base_bdevs": 4, 00:13:35.263 "num_base_bdevs_discovered": 4, 00:13:35.263 "num_base_bdevs_operational": 4, 00:13:35.263 "process": { 00:13:35.263 "type": "rebuild", 00:13:35.263 "target": "spare", 00:13:35.263 "progress": { 00:13:35.263 "blocks": 61440, 00:13:35.263 "percent": 31 00:13:35.263 } 00:13:35.263 }, 00:13:35.263 "base_bdevs_list": [ 00:13:35.263 { 00:13:35.263 "name": "spare", 00:13:35.263 "uuid": 
"8a82c150-9730-58f1-970f-020524a0bcc2", 00:13:35.263 "is_configured": true, 00:13:35.263 "data_offset": 0, 00:13:35.263 "data_size": 65536 00:13:35.263 }, 00:13:35.263 { 00:13:35.263 "name": "BaseBdev2", 00:13:35.263 "uuid": "0e26d78d-9492-5816-b9ff-b1fb05a0dbdf", 00:13:35.263 "is_configured": true, 00:13:35.263 "data_offset": 0, 00:13:35.263 "data_size": 65536 00:13:35.263 }, 00:13:35.263 { 00:13:35.263 "name": "BaseBdev3", 00:13:35.263 "uuid": "f6041394-ef64-59a6-af1a-c6c5b636cecb", 00:13:35.263 "is_configured": true, 00:13:35.263 "data_offset": 0, 00:13:35.263 "data_size": 65536 00:13:35.263 }, 00:13:35.263 { 00:13:35.263 "name": "BaseBdev4", 00:13:35.263 "uuid": "eee3faba-d215-52a0-a72f-cd15c20101bd", 00:13:35.263 "is_configured": true, 00:13:35.263 "data_offset": 0, 00:13:35.263 "data_size": 65536 00:13:35.263 } 00:13:35.263 ] 00:13:35.263 }' 00:13:35.263 09:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.263 09:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:35.263 09:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.263 09:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.263 09:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:36.230 09:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:36.230 09:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.230 09:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.230 09:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.230 09:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.230 09:48:14 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.230 09:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.230 09:48:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.230 09:48:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.230 09:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.230 09:48:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.230 09:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.230 "name": "raid_bdev1", 00:13:36.230 "uuid": "598faa48-4e4d-435b-8784-eb58561dffa7", 00:13:36.230 "strip_size_kb": 64, 00:13:36.230 "state": "online", 00:13:36.230 "raid_level": "raid5f", 00:13:36.230 "superblock": false, 00:13:36.230 "num_base_bdevs": 4, 00:13:36.230 "num_base_bdevs_discovered": 4, 00:13:36.230 "num_base_bdevs_operational": 4, 00:13:36.230 "process": { 00:13:36.230 "type": "rebuild", 00:13:36.230 "target": "spare", 00:13:36.230 "progress": { 00:13:36.230 "blocks": 82560, 00:13:36.230 "percent": 41 00:13:36.230 } 00:13:36.230 }, 00:13:36.230 "base_bdevs_list": [ 00:13:36.230 { 00:13:36.230 "name": "spare", 00:13:36.230 "uuid": "8a82c150-9730-58f1-970f-020524a0bcc2", 00:13:36.230 "is_configured": true, 00:13:36.230 "data_offset": 0, 00:13:36.230 "data_size": 65536 00:13:36.230 }, 00:13:36.230 { 00:13:36.230 "name": "BaseBdev2", 00:13:36.230 "uuid": "0e26d78d-9492-5816-b9ff-b1fb05a0dbdf", 00:13:36.230 "is_configured": true, 00:13:36.230 "data_offset": 0, 00:13:36.230 "data_size": 65536 00:13:36.230 }, 00:13:36.230 { 00:13:36.230 "name": "BaseBdev3", 00:13:36.230 "uuid": "f6041394-ef64-59a6-af1a-c6c5b636cecb", 00:13:36.230 "is_configured": true, 00:13:36.230 "data_offset": 0, 00:13:36.230 "data_size": 65536 00:13:36.230 }, 
00:13:36.230 { 00:13:36.230 "name": "BaseBdev4", 00:13:36.230 "uuid": "eee3faba-d215-52a0-a72f-cd15c20101bd", 00:13:36.230 "is_configured": true, 00:13:36.230 "data_offset": 0, 00:13:36.230 "data_size": 65536 00:13:36.230 } 00:13:36.230 ] 00:13:36.230 }' 00:13:36.230 09:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.230 09:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.230 09:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.230 09:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.230 09:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:37.164 09:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:37.164 09:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.164 09:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.164 09:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.164 09:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.164 09:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.164 09:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.164 09:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.164 09:48:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.164 09:48:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.164 09:48:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:37.164 09:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.164 "name": "raid_bdev1", 00:13:37.164 "uuid": "598faa48-4e4d-435b-8784-eb58561dffa7", 00:13:37.164 "strip_size_kb": 64, 00:13:37.164 "state": "online", 00:13:37.164 "raid_level": "raid5f", 00:13:37.164 "superblock": false, 00:13:37.164 "num_base_bdevs": 4, 00:13:37.164 "num_base_bdevs_discovered": 4, 00:13:37.164 "num_base_bdevs_operational": 4, 00:13:37.164 "process": { 00:13:37.164 "type": "rebuild", 00:13:37.164 "target": "spare", 00:13:37.164 "progress": { 00:13:37.164 "blocks": 103680, 00:13:37.164 "percent": 52 00:13:37.164 } 00:13:37.164 }, 00:13:37.164 "base_bdevs_list": [ 00:13:37.164 { 00:13:37.164 "name": "spare", 00:13:37.164 "uuid": "8a82c150-9730-58f1-970f-020524a0bcc2", 00:13:37.164 "is_configured": true, 00:13:37.164 "data_offset": 0, 00:13:37.164 "data_size": 65536 00:13:37.164 }, 00:13:37.164 { 00:13:37.164 "name": "BaseBdev2", 00:13:37.164 "uuid": "0e26d78d-9492-5816-b9ff-b1fb05a0dbdf", 00:13:37.164 "is_configured": true, 00:13:37.164 "data_offset": 0, 00:13:37.164 "data_size": 65536 00:13:37.164 }, 00:13:37.164 { 00:13:37.164 "name": "BaseBdev3", 00:13:37.164 "uuid": "f6041394-ef64-59a6-af1a-c6c5b636cecb", 00:13:37.164 "is_configured": true, 00:13:37.164 "data_offset": 0, 00:13:37.164 "data_size": 65536 00:13:37.164 }, 00:13:37.164 { 00:13:37.164 "name": "BaseBdev4", 00:13:37.164 "uuid": "eee3faba-d215-52a0-a72f-cd15c20101bd", 00:13:37.164 "is_configured": true, 00:13:37.164 "data_offset": 0, 00:13:37.164 "data_size": 65536 00:13:37.164 } 00:13:37.164 ] 00:13:37.164 }' 00:13:37.164 09:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.164 09:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.164 09:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.420 09:48:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.420 09:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:38.353 09:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:38.353 09:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.353 09:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.353 09:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.353 09:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.353 09:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.353 09:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.353 09:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.353 09:48:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.353 09:48:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.353 09:48:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.353 09:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.353 "name": "raid_bdev1", 00:13:38.353 "uuid": "598faa48-4e4d-435b-8784-eb58561dffa7", 00:13:38.353 "strip_size_kb": 64, 00:13:38.353 "state": "online", 00:13:38.353 "raid_level": "raid5f", 00:13:38.353 "superblock": false, 00:13:38.353 "num_base_bdevs": 4, 00:13:38.353 "num_base_bdevs_discovered": 4, 00:13:38.353 "num_base_bdevs_operational": 4, 00:13:38.353 "process": { 00:13:38.353 "type": "rebuild", 00:13:38.353 "target": "spare", 00:13:38.353 "progress": { 00:13:38.353 "blocks": 124800, 
00:13:38.353 "percent": 63 00:13:38.353 } 00:13:38.353 }, 00:13:38.353 "base_bdevs_list": [ 00:13:38.353 { 00:13:38.353 "name": "spare", 00:13:38.353 "uuid": "8a82c150-9730-58f1-970f-020524a0bcc2", 00:13:38.353 "is_configured": true, 00:13:38.353 "data_offset": 0, 00:13:38.353 "data_size": 65536 00:13:38.353 }, 00:13:38.353 { 00:13:38.353 "name": "BaseBdev2", 00:13:38.353 "uuid": "0e26d78d-9492-5816-b9ff-b1fb05a0dbdf", 00:13:38.353 "is_configured": true, 00:13:38.353 "data_offset": 0, 00:13:38.353 "data_size": 65536 00:13:38.353 }, 00:13:38.353 { 00:13:38.353 "name": "BaseBdev3", 00:13:38.353 "uuid": "f6041394-ef64-59a6-af1a-c6c5b636cecb", 00:13:38.353 "is_configured": true, 00:13:38.353 "data_offset": 0, 00:13:38.353 "data_size": 65536 00:13:38.353 }, 00:13:38.353 { 00:13:38.353 "name": "BaseBdev4", 00:13:38.353 "uuid": "eee3faba-d215-52a0-a72f-cd15c20101bd", 00:13:38.353 "is_configured": true, 00:13:38.353 "data_offset": 0, 00:13:38.353 "data_size": 65536 00:13:38.353 } 00:13:38.353 ] 00:13:38.353 }' 00:13:38.353 09:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.353 09:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.353 09:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.353 09:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.353 09:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:39.725 09:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:39.725 09:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.725 09:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.725 09:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:13:39.725 09:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.725 09:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.725 09:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.725 09:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.725 09:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.725 09:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.725 09:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.725 09:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.725 "name": "raid_bdev1", 00:13:39.725 "uuid": "598faa48-4e4d-435b-8784-eb58561dffa7", 00:13:39.725 "strip_size_kb": 64, 00:13:39.725 "state": "online", 00:13:39.725 "raid_level": "raid5f", 00:13:39.725 "superblock": false, 00:13:39.725 "num_base_bdevs": 4, 00:13:39.725 "num_base_bdevs_discovered": 4, 00:13:39.725 "num_base_bdevs_operational": 4, 00:13:39.725 "process": { 00:13:39.725 "type": "rebuild", 00:13:39.725 "target": "spare", 00:13:39.725 "progress": { 00:13:39.725 "blocks": 145920, 00:13:39.725 "percent": 74 00:13:39.725 } 00:13:39.725 }, 00:13:39.725 "base_bdevs_list": [ 00:13:39.725 { 00:13:39.725 "name": "spare", 00:13:39.725 "uuid": "8a82c150-9730-58f1-970f-020524a0bcc2", 00:13:39.725 "is_configured": true, 00:13:39.725 "data_offset": 0, 00:13:39.725 "data_size": 65536 00:13:39.725 }, 00:13:39.725 { 00:13:39.725 "name": "BaseBdev2", 00:13:39.725 "uuid": "0e26d78d-9492-5816-b9ff-b1fb05a0dbdf", 00:13:39.725 "is_configured": true, 00:13:39.725 "data_offset": 0, 00:13:39.725 "data_size": 65536 00:13:39.725 }, 00:13:39.725 { 00:13:39.725 "name": "BaseBdev3", 00:13:39.725 "uuid": 
"f6041394-ef64-59a6-af1a-c6c5b636cecb", 00:13:39.725 "is_configured": true, 00:13:39.725 "data_offset": 0, 00:13:39.725 "data_size": 65536 00:13:39.725 }, 00:13:39.725 { 00:13:39.725 "name": "BaseBdev4", 00:13:39.725 "uuid": "eee3faba-d215-52a0-a72f-cd15c20101bd", 00:13:39.725 "is_configured": true, 00:13:39.725 "data_offset": 0, 00:13:39.725 "data_size": 65536 00:13:39.725 } 00:13:39.725 ] 00:13:39.725 }' 00:13:39.725 09:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.725 09:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.725 09:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.725 09:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.725 09:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:40.661 09:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:40.661 09:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.661 09:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.661 09:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.661 09:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.661 09:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.661 09:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.661 09:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.661 09:48:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.661 09:48:19 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.661 09:48:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.661 09:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.661 "name": "raid_bdev1", 00:13:40.661 "uuid": "598faa48-4e4d-435b-8784-eb58561dffa7", 00:13:40.661 "strip_size_kb": 64, 00:13:40.661 "state": "online", 00:13:40.661 "raid_level": "raid5f", 00:13:40.661 "superblock": false, 00:13:40.661 "num_base_bdevs": 4, 00:13:40.661 "num_base_bdevs_discovered": 4, 00:13:40.661 "num_base_bdevs_operational": 4, 00:13:40.661 "process": { 00:13:40.661 "type": "rebuild", 00:13:40.661 "target": "spare", 00:13:40.661 "progress": { 00:13:40.661 "blocks": 167040, 00:13:40.661 "percent": 84 00:13:40.661 } 00:13:40.661 }, 00:13:40.661 "base_bdevs_list": [ 00:13:40.661 { 00:13:40.661 "name": "spare", 00:13:40.661 "uuid": "8a82c150-9730-58f1-970f-020524a0bcc2", 00:13:40.661 "is_configured": true, 00:13:40.661 "data_offset": 0, 00:13:40.661 "data_size": 65536 00:13:40.661 }, 00:13:40.661 { 00:13:40.661 "name": "BaseBdev2", 00:13:40.661 "uuid": "0e26d78d-9492-5816-b9ff-b1fb05a0dbdf", 00:13:40.661 "is_configured": true, 00:13:40.661 "data_offset": 0, 00:13:40.661 "data_size": 65536 00:13:40.661 }, 00:13:40.661 { 00:13:40.661 "name": "BaseBdev3", 00:13:40.661 "uuid": "f6041394-ef64-59a6-af1a-c6c5b636cecb", 00:13:40.661 "is_configured": true, 00:13:40.661 "data_offset": 0, 00:13:40.661 "data_size": 65536 00:13:40.661 }, 00:13:40.661 { 00:13:40.661 "name": "BaseBdev4", 00:13:40.661 "uuid": "eee3faba-d215-52a0-a72f-cd15c20101bd", 00:13:40.661 "is_configured": true, 00:13:40.661 "data_offset": 0, 00:13:40.661 "data_size": 65536 00:13:40.661 } 00:13:40.661 ] 00:13:40.661 }' 00:13:40.661 09:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.661 09:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:13:40.661 09:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.661 09:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.661 09:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:41.598 09:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:41.598 09:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.598 09:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.598 09:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.598 09:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.598 09:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.598 09:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.598 09:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.598 09:48:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.598 09:48:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.598 09:48:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.598 09:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.598 "name": "raid_bdev1", 00:13:41.598 "uuid": "598faa48-4e4d-435b-8784-eb58561dffa7", 00:13:41.598 "strip_size_kb": 64, 00:13:41.598 "state": "online", 00:13:41.598 "raid_level": "raid5f", 00:13:41.598 "superblock": false, 00:13:41.598 "num_base_bdevs": 4, 00:13:41.598 "num_base_bdevs_discovered": 4, 00:13:41.598 
"num_base_bdevs_operational": 4, 00:13:41.598 "process": { 00:13:41.598 "type": "rebuild", 00:13:41.598 "target": "spare", 00:13:41.598 "progress": { 00:13:41.598 "blocks": 188160, 00:13:41.598 "percent": 95 00:13:41.598 } 00:13:41.598 }, 00:13:41.598 "base_bdevs_list": [ 00:13:41.598 { 00:13:41.598 "name": "spare", 00:13:41.598 "uuid": "8a82c150-9730-58f1-970f-020524a0bcc2", 00:13:41.598 "is_configured": true, 00:13:41.598 "data_offset": 0, 00:13:41.598 "data_size": 65536 00:13:41.598 }, 00:13:41.598 { 00:13:41.598 "name": "BaseBdev2", 00:13:41.598 "uuid": "0e26d78d-9492-5816-b9ff-b1fb05a0dbdf", 00:13:41.598 "is_configured": true, 00:13:41.598 "data_offset": 0, 00:13:41.598 "data_size": 65536 00:13:41.598 }, 00:13:41.598 { 00:13:41.598 "name": "BaseBdev3", 00:13:41.598 "uuid": "f6041394-ef64-59a6-af1a-c6c5b636cecb", 00:13:41.598 "is_configured": true, 00:13:41.598 "data_offset": 0, 00:13:41.598 "data_size": 65536 00:13:41.598 }, 00:13:41.598 { 00:13:41.598 "name": "BaseBdev4", 00:13:41.598 "uuid": "eee3faba-d215-52a0-a72f-cd15c20101bd", 00:13:41.598 "is_configured": true, 00:13:41.598 "data_offset": 0, 00:13:41.598 "data_size": 65536 00:13:41.598 } 00:13:41.598 ] 00:13:41.598 }' 00:13:41.598 09:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.598 09:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.598 09:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.598 09:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.598 09:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:42.176 [2024-10-30 09:48:20.544609] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:42.176 [2024-10-30 09:48:20.544683] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid 
bdev raid_bdev1 00:13:42.176 [2024-10-30 09:48:20.544726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.743 "name": "raid_bdev1", 00:13:42.743 "uuid": "598faa48-4e4d-435b-8784-eb58561dffa7", 00:13:42.743 "strip_size_kb": 64, 00:13:42.743 "state": "online", 00:13:42.743 "raid_level": "raid5f", 00:13:42.743 "superblock": false, 00:13:42.743 "num_base_bdevs": 4, 00:13:42.743 "num_base_bdevs_discovered": 4, 00:13:42.743 "num_base_bdevs_operational": 4, 00:13:42.743 "base_bdevs_list": [ 00:13:42.743 { 00:13:42.743 "name": "spare", 00:13:42.743 "uuid": "8a82c150-9730-58f1-970f-020524a0bcc2", 00:13:42.743 "is_configured": true, 00:13:42.743 "data_offset": 
0, 00:13:42.743 "data_size": 65536 00:13:42.743 }, 00:13:42.743 { 00:13:42.743 "name": "BaseBdev2", 00:13:42.743 "uuid": "0e26d78d-9492-5816-b9ff-b1fb05a0dbdf", 00:13:42.743 "is_configured": true, 00:13:42.743 "data_offset": 0, 00:13:42.743 "data_size": 65536 00:13:42.743 }, 00:13:42.743 { 00:13:42.743 "name": "BaseBdev3", 00:13:42.743 "uuid": "f6041394-ef64-59a6-af1a-c6c5b636cecb", 00:13:42.743 "is_configured": true, 00:13:42.743 "data_offset": 0, 00:13:42.743 "data_size": 65536 00:13:42.743 }, 00:13:42.743 { 00:13:42.743 "name": "BaseBdev4", 00:13:42.743 "uuid": "eee3faba-d215-52a0-a72f-cd15c20101bd", 00:13:42.743 "is_configured": true, 00:13:42.743 "data_offset": 0, 00:13:42.743 "data_size": 65536 00:13:42.743 } 00:13:42.743 ] 00:13:42.743 }' 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.743 09:48:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.743 "name": "raid_bdev1", 00:13:42.743 "uuid": "598faa48-4e4d-435b-8784-eb58561dffa7", 00:13:42.743 "strip_size_kb": 64, 00:13:42.743 "state": "online", 00:13:42.743 "raid_level": "raid5f", 00:13:42.743 "superblock": false, 00:13:42.743 "num_base_bdevs": 4, 00:13:42.743 "num_base_bdevs_discovered": 4, 00:13:42.743 "num_base_bdevs_operational": 4, 00:13:42.743 "base_bdevs_list": [ 00:13:42.743 { 00:13:42.743 "name": "spare", 00:13:42.743 "uuid": "8a82c150-9730-58f1-970f-020524a0bcc2", 00:13:42.743 "is_configured": true, 00:13:42.743 "data_offset": 0, 00:13:42.743 "data_size": 65536 00:13:42.743 }, 00:13:42.743 { 00:13:42.743 "name": "BaseBdev2", 00:13:42.743 "uuid": "0e26d78d-9492-5816-b9ff-b1fb05a0dbdf", 00:13:42.743 "is_configured": true, 00:13:42.743 "data_offset": 0, 00:13:42.743 "data_size": 65536 00:13:42.743 }, 00:13:42.743 { 00:13:42.743 "name": "BaseBdev3", 00:13:42.743 "uuid": "f6041394-ef64-59a6-af1a-c6c5b636cecb", 00:13:42.743 "is_configured": true, 00:13:42.743 "data_offset": 0, 00:13:42.743 "data_size": 65536 00:13:42.743 }, 00:13:42.743 { 00:13:42.743 "name": "BaseBdev4", 00:13:42.743 "uuid": "eee3faba-d215-52a0-a72f-cd15c20101bd", 00:13:42.743 "is_configured": true, 00:13:42.743 "data_offset": 0, 00:13:42.743 "data_size": 65536 00:13:42.743 } 00:13:42.743 ] 00:13:42.743 }' 00:13:42.743 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.002 09:48:21 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:43.002 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.002 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.002 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:13:43.002 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.002 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.002 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:43.002 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.002 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:43.002 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.002 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.002 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.002 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.002 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.002 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.002 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.002 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.002 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.002 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:43.002 "name": "raid_bdev1", 00:13:43.002 "uuid": "598faa48-4e4d-435b-8784-eb58561dffa7", 00:13:43.002 "strip_size_kb": 64, 00:13:43.002 "state": "online", 00:13:43.002 "raid_level": "raid5f", 00:13:43.002 "superblock": false, 00:13:43.002 "num_base_bdevs": 4, 00:13:43.002 "num_base_bdevs_discovered": 4, 00:13:43.002 "num_base_bdevs_operational": 4, 00:13:43.002 "base_bdevs_list": [ 00:13:43.002 { 00:13:43.002 "name": "spare", 00:13:43.002 "uuid": "8a82c150-9730-58f1-970f-020524a0bcc2", 00:13:43.002 "is_configured": true, 00:13:43.002 "data_offset": 0, 00:13:43.002 "data_size": 65536 00:13:43.002 }, 00:13:43.002 { 00:13:43.002 "name": "BaseBdev2", 00:13:43.002 "uuid": "0e26d78d-9492-5816-b9ff-b1fb05a0dbdf", 00:13:43.002 "is_configured": true, 00:13:43.002 "data_offset": 0, 00:13:43.002 "data_size": 65536 00:13:43.002 }, 00:13:43.002 { 00:13:43.002 "name": "BaseBdev3", 00:13:43.002 "uuid": "f6041394-ef64-59a6-af1a-c6c5b636cecb", 00:13:43.002 "is_configured": true, 00:13:43.002 "data_offset": 0, 00:13:43.002 "data_size": 65536 00:13:43.002 }, 00:13:43.002 { 00:13:43.002 "name": "BaseBdev4", 00:13:43.002 "uuid": "eee3faba-d215-52a0-a72f-cd15c20101bd", 00:13:43.002 "is_configured": true, 00:13:43.002 "data_offset": 0, 00:13:43.002 "data_size": 65536 00:13:43.002 } 00:13:43.002 ] 00:13:43.002 }' 00:13:43.002 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.002 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.260 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:43.260 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.260 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.260 [2024-10-30 09:48:21.729408] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.260 [2024-10-30 
09:48:21.729438] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.260 [2024-10-30 09:48:21.729506] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.260 [2024-10-30 09:48:21.729583] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.260 [2024-10-30 09:48:21.729596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:43.260 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.260 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:43.260 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.260 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.260 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.260 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.260 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:43.260 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:43.260 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:43.260 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:43.260 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.260 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:43.260 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:43.260 09:48:21 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:43.260 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:43.260 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:43.260 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:43.260 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:43.260 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:43.519 /dev/nbd0 00:13:43.519 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:43.519 09:48:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:43.519 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:43.519 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:13:43.519 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:43.519 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:43.519 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:43.519 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:13:43.519 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:43.519 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:43.519 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:43.519 1+0 records in 00:13:43.519 1+0 records out 00:13:43.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318106 s, 12.9 MB/s 
00:13:43.519 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.519 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:13:43.519 09:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.519 09:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:43.519 09:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:13:43.519 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:43.519 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:43.519 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:43.777 /dev/nbd1 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:43.777 1+0 records in 00:13:43.777 1+0 records out 00:13:43.777 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292594 s, 14.0 MB/s 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.777 09:48:22 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:44.034 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:44.034 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:44.034 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:44.034 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.034 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.034 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:44.034 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:44.034 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.034 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.034 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:44.292 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:44.292 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:44.292 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:44.292 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.292 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.292 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:44.292 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:44.292 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.292 09:48:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:44.292 09:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82217 00:13:44.292 09:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 82217 ']' 00:13:44.292 09:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 82217 00:13:44.292 09:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:13:44.292 09:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:44.292 09:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82217 00:13:44.292 killing process with pid 82217 00:13:44.292 Received shutdown signal, test time was about 60.000000 seconds 00:13:44.292 00:13:44.292 Latency(us) 00:13:44.292 [2024-10-30T09:48:22.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.292 [2024-10-30T09:48:22.912Z] =================================================================================================================== 00:13:44.292 [2024-10-30T09:48:22.912Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:44.292 09:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:44.292 09:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:44.292 09:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82217' 00:13:44.292 09:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 82217 00:13:44.292 [2024-10-30 09:48:22.828995] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:44.292 09:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 82217 00:13:44.550 [2024-10-30 09:48:23.066905] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:13:45.116 09:48:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:45.117 00:13:45.117 real 0m17.754s 00:13:45.117 user 0m20.918s 00:13:45.117 sys 0m1.665s 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.117 ************************************ 00:13:45.117 END TEST raid5f_rebuild_test 00:13:45.117 ************************************ 00:13:45.117 09:48:23 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:13:45.117 09:48:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:45.117 09:48:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:45.117 09:48:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:45.117 ************************************ 00:13:45.117 START TEST raid5f_rebuild_test_sb 00:13:45.117 ************************************ 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 true false true 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:45.117 09:48:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:45.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82716 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82716 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 82716 ']' 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:45.117 09:48:23 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.117 09:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:45.117 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:45.117 Zero copy mechanism will not be used. 00:13:45.117 [2024-10-30 09:48:23.727338] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:13:45.117 [2024-10-30 09:48:23.727459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82716 ] 00:13:45.376 [2024-10-30 09:48:23.882897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.376 [2024-10-30 09:48:23.965266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.635 [2024-10-30 09:48:24.073920] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.635 [2024-10-30 09:48:24.073949] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:46.201 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:46.201 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:13:46.201 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:46.201 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:46.201 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.201 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.201 BaseBdev1_malloc 00:13:46.201 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.201 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:46.201 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.201 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.201 [2024-10-30 09:48:24.593109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:46.201 [2024-10-30 09:48:24.593260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.202 [2024-10-30 09:48:24.593282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:46.202 [2024-10-30 09:48:24.593292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.202 [2024-10-30 09:48:24.594955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.202 [2024-10-30 09:48:24.594988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:46.202 BaseBdev1 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.202 09:48:24 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.202 BaseBdev2_malloc 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.202 [2024-10-30 09:48:24.623807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:46.202 [2024-10-30 09:48:24.623849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.202 [2024-10-30 09:48:24.623862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:46.202 [2024-10-30 09:48:24.623872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.202 [2024-10-30 09:48:24.625569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.202 [2024-10-30 09:48:24.625599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:46.202 BaseBdev2 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.202 BaseBdev3_malloc 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.202 [2024-10-30 09:48:24.668396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:46.202 [2024-10-30 09:48:24.668438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.202 [2024-10-30 09:48:24.668453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:46.202 [2024-10-30 09:48:24.668461] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.202 [2024-10-30 09:48:24.670100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.202 [2024-10-30 09:48:24.670128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:46.202 BaseBdev3 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.202 BaseBdev4_malloc 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev4_malloc -p BaseBdev4 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.202 [2024-10-30 09:48:24.699117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:46.202 [2024-10-30 09:48:24.699245] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.202 [2024-10-30 09:48:24.699262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:46.202 [2024-10-30 09:48:24.699270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.202 [2024-10-30 09:48:24.700898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.202 [2024-10-30 09:48:24.700933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:46.202 BaseBdev4 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.202 spare_malloc 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.202 spare_delay 00:13:46.202 
09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.202 [2024-10-30 09:48:24.741931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:46.202 [2024-10-30 09:48:24.741972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.202 [2024-10-30 09:48:24.741984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:46.202 [2024-10-30 09:48:24.741992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.202 [2024-10-30 09:48:24.743642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.202 [2024-10-30 09:48:24.743670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:46.202 spare 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.202 [2024-10-30 09:48:24.749990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:46.202 [2024-10-30 09:48:24.751468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:46.202 [2024-10-30 09:48:24.751515] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:46.202 [2024-10-30 09:48:24.751554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:46.202 [2024-10-30 09:48:24.751691] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:46.202 [2024-10-30 09:48:24.751703] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:46.202 [2024-10-30 09:48:24.751892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:46.202 [2024-10-30 09:48:24.755701] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:46.202 [2024-10-30 09:48:24.755715] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:46.202 [2024-10-30 09:48:24.755852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.202 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.202 "name": "raid_bdev1", 00:13:46.202 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:13:46.202 "strip_size_kb": 64, 00:13:46.202 "state": "online", 00:13:46.202 "raid_level": "raid5f", 00:13:46.202 "superblock": true, 00:13:46.202 "num_base_bdevs": 4, 00:13:46.202 "num_base_bdevs_discovered": 4, 00:13:46.202 "num_base_bdevs_operational": 4, 00:13:46.202 "base_bdevs_list": [ 00:13:46.202 { 00:13:46.202 "name": "BaseBdev1", 00:13:46.202 "uuid": "2d0f22f4-8325-5121-9b91-1885ace4a24c", 00:13:46.202 "is_configured": true, 00:13:46.202 "data_offset": 2048, 00:13:46.202 "data_size": 63488 00:13:46.202 }, 00:13:46.202 { 00:13:46.202 "name": "BaseBdev2", 00:13:46.202 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:13:46.202 "is_configured": true, 00:13:46.202 "data_offset": 2048, 00:13:46.202 "data_size": 63488 00:13:46.202 }, 00:13:46.202 { 00:13:46.202 "name": "BaseBdev3", 00:13:46.202 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:13:46.202 "is_configured": true, 00:13:46.202 "data_offset": 2048, 00:13:46.202 "data_size": 63488 00:13:46.203 }, 00:13:46.203 { 00:13:46.203 "name": 
"BaseBdev4", 00:13:46.203 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:13:46.203 "is_configured": true, 00:13:46.203 "data_offset": 2048, 00:13:46.203 "data_size": 63488 00:13:46.203 } 00:13:46.203 ] 00:13:46.203 }' 00:13:46.203 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.203 09:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.460 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:46.460 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.460 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:46.460 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.718 [2024-10-30 09:48:25.084249] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:46.718 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.718 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:13:46.718 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:46.718 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.718 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.718 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.718 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.718 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:46.718 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:46.718 09:48:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:46.718 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:46.718 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:46.718 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:46.718 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:46.718 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:46.718 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:46.718 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:46.718 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:46.718 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:46.718 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:46.718 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:46.718 [2024-10-30 09:48:25.320169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:46.976 /dev/nbd0 00:13:46.976 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:46.976 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:46.976 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:46.976 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:13:46.976 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 
-- # (( i = 1 )) 00:13:46.976 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:46.976 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:46.976 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:13:46.976 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:46.976 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:46.976 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:46.976 1+0 records in 00:13:46.976 1+0 records out 00:13:46.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258346 s, 15.9 MB/s 00:13:46.976 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.976 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:13:46.976 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.976 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:46.976 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:13:46.976 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:46.976 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:46.976 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:46.976 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:13:46.976 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 
00:13:46.976 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:13:47.235 496+0 records in 00:13:47.235 496+0 records out 00:13:47.235 97517568 bytes (98 MB, 93 MiB) copied, 0.455901 s, 214 MB/s 00:13:47.235 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:47.235 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:47.235 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:47.235 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:47.235 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:47.235 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:47.235 09:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:47.494 [2024-10-30 09:48:26.025298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:47.494 09:48:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.494 [2024-10-30 09:48:26.053624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.494 09:48:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.494 "name": "raid_bdev1", 00:13:47.494 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:13:47.494 "strip_size_kb": 64, 00:13:47.494 "state": "online", 00:13:47.494 "raid_level": "raid5f", 00:13:47.494 "superblock": true, 00:13:47.494 "num_base_bdevs": 4, 00:13:47.494 "num_base_bdevs_discovered": 3, 00:13:47.494 "num_base_bdevs_operational": 3, 00:13:47.494 "base_bdevs_list": [ 00:13:47.494 { 00:13:47.494 "name": null, 00:13:47.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.494 "is_configured": false, 00:13:47.494 "data_offset": 0, 00:13:47.494 "data_size": 63488 00:13:47.494 }, 00:13:47.494 { 00:13:47.494 "name": "BaseBdev2", 00:13:47.494 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:13:47.494 "is_configured": true, 00:13:47.494 "data_offset": 2048, 00:13:47.494 "data_size": 63488 00:13:47.494 }, 00:13:47.494 { 00:13:47.494 "name": "BaseBdev3", 00:13:47.494 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:13:47.494 "is_configured": true, 00:13:47.494 "data_offset": 2048, 00:13:47.494 "data_size": 63488 00:13:47.494 }, 00:13:47.494 { 00:13:47.494 "name": "BaseBdev4", 00:13:47.494 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:13:47.494 "is_configured": true, 00:13:47.494 "data_offset": 2048, 00:13:47.494 "data_size": 63488 00:13:47.494 } 00:13:47.494 ] 00:13:47.494 }' 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.494 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.751 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:47.751 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.751 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.751 [2024-10-30 09:48:26.341671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:47.751 [2024-10-30 09:48:26.349794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:13:47.751 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.751 09:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:47.751 [2024-10-30 09:48:26.355255] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.123 "name": "raid_bdev1", 00:13:49.123 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:13:49.123 "strip_size_kb": 64, 00:13:49.123 "state": "online", 00:13:49.123 "raid_level": "raid5f", 00:13:49.123 "superblock": true, 00:13:49.123 "num_base_bdevs": 4, 00:13:49.123 "num_base_bdevs_discovered": 4, 00:13:49.123 "num_base_bdevs_operational": 4, 00:13:49.123 "process": { 00:13:49.123 "type": "rebuild", 00:13:49.123 "target": "spare", 00:13:49.123 "progress": { 00:13:49.123 "blocks": 19200, 00:13:49.123 "percent": 10 00:13:49.123 } 00:13:49.123 }, 00:13:49.123 "base_bdevs_list": [ 00:13:49.123 { 00:13:49.123 "name": "spare", 00:13:49.123 "uuid": "a9ac6bc1-eeac-528f-b3a4-1574b03e612c", 00:13:49.123 "is_configured": true, 00:13:49.123 "data_offset": 2048, 00:13:49.123 "data_size": 63488 00:13:49.123 }, 00:13:49.123 { 00:13:49.123 "name": "BaseBdev2", 00:13:49.123 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:13:49.123 "is_configured": true, 00:13:49.123 "data_offset": 2048, 00:13:49.123 "data_size": 63488 00:13:49.123 }, 00:13:49.123 { 00:13:49.123 "name": "BaseBdev3", 00:13:49.123 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:13:49.123 "is_configured": true, 00:13:49.123 "data_offset": 2048, 00:13:49.123 "data_size": 63488 00:13:49.123 }, 00:13:49.123 { 00:13:49.123 "name": "BaseBdev4", 00:13:49.123 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:13:49.123 "is_configured": true, 00:13:49.123 "data_offset": 2048, 00:13:49.123 "data_size": 63488 00:13:49.123 } 00:13:49.123 ] 00:13:49.123 }' 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
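The checks above and below pipe `raid_bdev_info` through `jq -r '.process.type // "none"'` and `'.process.target // "none"'`, using jq's `//` alternative operator to fall back to `"none"` when no rebuild process is running. A small sketch of that lookup logic (the helper name and the trimmed JSON are illustrative, with only the fields the filters touch, values copied from this transcript):

```python
import json

# Trimmed stand-in for one entry of the bdev_raid_get_bdevs output.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "state": "online",
  "process": { "type": "rebuild", "target": "spare" }
}
""")

def jq_alternative(doc, *path, default="none"):
    """Emulate jq's '.a.b // "none"': walk the key path and fall back
    to the default when a key is missing or the value is null/false."""
    cur = doc
    for key in path:
        if not isinstance(cur, dict) or key not in cur:
            return default
        cur = cur[key]
    return default if cur in (None, False) else cur

print(jq_alternative(raid_bdev_info, "process", "type"))          # rebuild
print(jq_alternative(raid_bdev_info, "process", "target"))        # spare
print(jq_alternative({"name": "raid_bdev1"}, "process", "type"))  # none
```

The last call mirrors the idle case later in the log, where `verify_raid_bdev_process raid_bdev1 none none` expects both filters to print `none` once the rebuild has finished or been torn down.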
00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.123 [2024-10-30 09:48:27.464282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:49.123 [2024-10-30 09:48:27.562695] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:49.123 [2024-10-30 09:48:27.562765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.123 [2024-10-30 09:48:27.562780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:49.123 [2024-10-30 09:48:27.562788] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.123 
09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.123 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.123 "name": "raid_bdev1", 00:13:49.123 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:13:49.124 "strip_size_kb": 64, 00:13:49.124 "state": "online", 00:13:49.124 "raid_level": "raid5f", 00:13:49.124 "superblock": true, 00:13:49.124 "num_base_bdevs": 4, 00:13:49.124 "num_base_bdevs_discovered": 3, 00:13:49.124 "num_base_bdevs_operational": 3, 00:13:49.124 "base_bdevs_list": [ 00:13:49.124 { 00:13:49.124 "name": null, 00:13:49.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.124 "is_configured": false, 00:13:49.124 "data_offset": 0, 00:13:49.124 "data_size": 63488 00:13:49.124 }, 00:13:49.124 { 00:13:49.124 "name": "BaseBdev2", 00:13:49.124 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:13:49.124 "is_configured": true, 00:13:49.124 "data_offset": 2048, 00:13:49.124 "data_size": 63488 00:13:49.124 }, 00:13:49.124 { 00:13:49.124 "name": "BaseBdev3", 00:13:49.124 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:13:49.124 "is_configured": true, 00:13:49.124 "data_offset": 2048, 00:13:49.124 
"data_size": 63488 00:13:49.124 }, 00:13:49.124 { 00:13:49.124 "name": "BaseBdev4", 00:13:49.124 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:13:49.124 "is_configured": true, 00:13:49.124 "data_offset": 2048, 00:13:49.124 "data_size": 63488 00:13:49.124 } 00:13:49.124 ] 00:13:49.124 }' 00:13:49.124 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.124 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.382 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:49.382 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.382 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:49.382 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:49.382 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.382 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.382 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.382 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.382 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.382 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.382 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.382 "name": "raid_bdev1", 00:13:49.382 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:13:49.382 "strip_size_kb": 64, 00:13:49.382 "state": "online", 00:13:49.382 "raid_level": "raid5f", 00:13:49.382 "superblock": true, 00:13:49.382 "num_base_bdevs": 4, 00:13:49.382 
"num_base_bdevs_discovered": 3, 00:13:49.382 "num_base_bdevs_operational": 3, 00:13:49.382 "base_bdevs_list": [ 00:13:49.382 { 00:13:49.382 "name": null, 00:13:49.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.382 "is_configured": false, 00:13:49.382 "data_offset": 0, 00:13:49.382 "data_size": 63488 00:13:49.382 }, 00:13:49.382 { 00:13:49.382 "name": "BaseBdev2", 00:13:49.382 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:13:49.382 "is_configured": true, 00:13:49.382 "data_offset": 2048, 00:13:49.382 "data_size": 63488 00:13:49.382 }, 00:13:49.382 { 00:13:49.382 "name": "BaseBdev3", 00:13:49.382 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:13:49.382 "is_configured": true, 00:13:49.382 "data_offset": 2048, 00:13:49.382 "data_size": 63488 00:13:49.382 }, 00:13:49.382 { 00:13:49.382 "name": "BaseBdev4", 00:13:49.382 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:13:49.382 "is_configured": true, 00:13:49.382 "data_offset": 2048, 00:13:49.382 "data_size": 63488 00:13:49.382 } 00:13:49.382 ] 00:13:49.382 }' 00:13:49.382 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.382 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:49.382 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.382 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:49.382 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:49.382 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.382 09:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.382 [2024-10-30 09:48:27.994951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:49.640 [2024-10-30 
09:48:28.002610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:13:49.640 09:48:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.640 09:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:49.640 [2024-10-30 09:48:28.007822] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.576 "name": "raid_bdev1", 00:13:50.576 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:13:50.576 "strip_size_kb": 64, 00:13:50.576 "state": "online", 00:13:50.576 "raid_level": "raid5f", 00:13:50.576 "superblock": true, 00:13:50.576 "num_base_bdevs": 4, 00:13:50.576 "num_base_bdevs_discovered": 4, 00:13:50.576 
"num_base_bdevs_operational": 4, 00:13:50.576 "process": { 00:13:50.576 "type": "rebuild", 00:13:50.576 "target": "spare", 00:13:50.576 "progress": { 00:13:50.576 "blocks": 19200, 00:13:50.576 "percent": 10 00:13:50.576 } 00:13:50.576 }, 00:13:50.576 "base_bdevs_list": [ 00:13:50.576 { 00:13:50.576 "name": "spare", 00:13:50.576 "uuid": "a9ac6bc1-eeac-528f-b3a4-1574b03e612c", 00:13:50.576 "is_configured": true, 00:13:50.576 "data_offset": 2048, 00:13:50.576 "data_size": 63488 00:13:50.576 }, 00:13:50.576 { 00:13:50.576 "name": "BaseBdev2", 00:13:50.576 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:13:50.576 "is_configured": true, 00:13:50.576 "data_offset": 2048, 00:13:50.576 "data_size": 63488 00:13:50.576 }, 00:13:50.576 { 00:13:50.576 "name": "BaseBdev3", 00:13:50.576 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:13:50.576 "is_configured": true, 00:13:50.576 "data_offset": 2048, 00:13:50.576 "data_size": 63488 00:13:50.576 }, 00:13:50.576 { 00:13:50.576 "name": "BaseBdev4", 00:13:50.576 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:13:50.576 "is_configured": true, 00:13:50.576 "data_offset": 2048, 00:13:50.576 "data_size": 63488 00:13:50.576 } 00:13:50.576 ] 00:13:50.576 }' 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:50.576 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:50.576 
09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=504 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.576 "name": "raid_bdev1", 00:13:50.576 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:13:50.576 "strip_size_kb": 64, 00:13:50.576 "state": "online", 00:13:50.576 "raid_level": "raid5f", 00:13:50.576 "superblock": true, 00:13:50.576 "num_base_bdevs": 4, 00:13:50.576 "num_base_bdevs_discovered": 4, 00:13:50.576 
"num_base_bdevs_operational": 4, 00:13:50.576 "process": { 00:13:50.576 "type": "rebuild", 00:13:50.576 "target": "spare", 00:13:50.576 "progress": { 00:13:50.576 "blocks": 19200, 00:13:50.576 "percent": 10 00:13:50.576 } 00:13:50.576 }, 00:13:50.576 "base_bdevs_list": [ 00:13:50.576 { 00:13:50.576 "name": "spare", 00:13:50.576 "uuid": "a9ac6bc1-eeac-528f-b3a4-1574b03e612c", 00:13:50.576 "is_configured": true, 00:13:50.576 "data_offset": 2048, 00:13:50.576 "data_size": 63488 00:13:50.576 }, 00:13:50.576 { 00:13:50.576 "name": "BaseBdev2", 00:13:50.576 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:13:50.576 "is_configured": true, 00:13:50.576 "data_offset": 2048, 00:13:50.576 "data_size": 63488 00:13:50.576 }, 00:13:50.576 { 00:13:50.576 "name": "BaseBdev3", 00:13:50.576 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:13:50.576 "is_configured": true, 00:13:50.576 "data_offset": 2048, 00:13:50.576 "data_size": 63488 00:13:50.576 }, 00:13:50.576 { 00:13:50.576 "name": "BaseBdev4", 00:13:50.576 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:13:50.576 "is_configured": true, 00:13:50.576 "data_offset": 2048, 00:13:50.576 "data_size": 63488 00:13:50.576 } 00:13:50.576 ] 00:13:50.576 }' 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.576 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:50.577 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.577 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:50.577 09:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:51.950 09:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:51.950 09:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:13:51.950 09:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.950 09:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:51.950 09:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:51.950 09:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.950 09:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.950 09:48:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.950 09:48:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.950 09:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.950 09:48:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.950 09:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.950 "name": "raid_bdev1", 00:13:51.950 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:13:51.950 "strip_size_kb": 64, 00:13:51.950 "state": "online", 00:13:51.950 "raid_level": "raid5f", 00:13:51.950 "superblock": true, 00:13:51.950 "num_base_bdevs": 4, 00:13:51.950 "num_base_bdevs_discovered": 4, 00:13:51.950 "num_base_bdevs_operational": 4, 00:13:51.950 "process": { 00:13:51.950 "type": "rebuild", 00:13:51.950 "target": "spare", 00:13:51.950 "progress": { 00:13:51.950 "blocks": 40320, 00:13:51.950 "percent": 21 00:13:51.950 } 00:13:51.950 }, 00:13:51.950 "base_bdevs_list": [ 00:13:51.950 { 00:13:51.950 "name": "spare", 00:13:51.950 "uuid": "a9ac6bc1-eeac-528f-b3a4-1574b03e612c", 00:13:51.950 "is_configured": true, 00:13:51.950 "data_offset": 2048, 00:13:51.950 "data_size": 63488 00:13:51.950 }, 00:13:51.950 { 00:13:51.950 "name": "BaseBdev2", 00:13:51.950 
"uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:13:51.950 "is_configured": true, 00:13:51.950 "data_offset": 2048, 00:13:51.950 "data_size": 63488 00:13:51.950 }, 00:13:51.950 { 00:13:51.950 "name": "BaseBdev3", 00:13:51.950 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:13:51.950 "is_configured": true, 00:13:51.950 "data_offset": 2048, 00:13:51.950 "data_size": 63488 00:13:51.950 }, 00:13:51.950 { 00:13:51.950 "name": "BaseBdev4", 00:13:51.950 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:13:51.950 "is_configured": true, 00:13:51.950 "data_offset": 2048, 00:13:51.950 "data_size": 63488 00:13:51.950 } 00:13:51.950 ] 00:13:51.950 }' 00:13:51.950 09:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.950 09:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:51.950 09:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.950 09:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:51.950 09:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:52.883 09:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:52.883 09:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.883 09:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.883 09:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.883 09:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.883 09:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.883 09:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:52.883 09:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.883 09:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.883 09:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.883 09:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.883 09:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.883 "name": "raid_bdev1", 00:13:52.883 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:13:52.883 "strip_size_kb": 64, 00:13:52.883 "state": "online", 00:13:52.883 "raid_level": "raid5f", 00:13:52.883 "superblock": true, 00:13:52.883 "num_base_bdevs": 4, 00:13:52.883 "num_base_bdevs_discovered": 4, 00:13:52.883 "num_base_bdevs_operational": 4, 00:13:52.883 "process": { 00:13:52.883 "type": "rebuild", 00:13:52.883 "target": "spare", 00:13:52.883 "progress": { 00:13:52.883 "blocks": 61440, 00:13:52.883 "percent": 32 00:13:52.883 } 00:13:52.883 }, 00:13:52.883 "base_bdevs_list": [ 00:13:52.883 { 00:13:52.883 "name": "spare", 00:13:52.883 "uuid": "a9ac6bc1-eeac-528f-b3a4-1574b03e612c", 00:13:52.883 "is_configured": true, 00:13:52.883 "data_offset": 2048, 00:13:52.883 "data_size": 63488 00:13:52.883 }, 00:13:52.883 { 00:13:52.883 "name": "BaseBdev2", 00:13:52.883 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:13:52.883 "is_configured": true, 00:13:52.883 "data_offset": 2048, 00:13:52.883 "data_size": 63488 00:13:52.883 }, 00:13:52.883 { 00:13:52.883 "name": "BaseBdev3", 00:13:52.883 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:13:52.883 "is_configured": true, 00:13:52.883 "data_offset": 2048, 00:13:52.883 "data_size": 63488 00:13:52.883 }, 00:13:52.883 { 00:13:52.883 "name": "BaseBdev4", 00:13:52.883 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:13:52.883 "is_configured": true, 
00:13:52.883 "data_offset": 2048, 00:13:52.883 "data_size": 63488 00:13:52.883 } 00:13:52.883 ] 00:13:52.883 }' 00:13:52.883 09:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.883 09:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.883 09:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.883 09:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.883 09:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:53.848 09:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:53.848 09:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.848 09:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.848 09:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:53.848 09:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.848 09:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.848 09:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.848 09:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.848 09:48:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.848 09:48:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.848 09:48:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.848 09:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:13:53.848 "name": "raid_bdev1", 00:13:53.848 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:13:53.848 "strip_size_kb": 64, 00:13:53.848 "state": "online", 00:13:53.848 "raid_level": "raid5f", 00:13:53.848 "superblock": true, 00:13:53.848 "num_base_bdevs": 4, 00:13:53.848 "num_base_bdevs_discovered": 4, 00:13:53.848 "num_base_bdevs_operational": 4, 00:13:53.848 "process": { 00:13:53.848 "type": "rebuild", 00:13:53.848 "target": "spare", 00:13:53.848 "progress": { 00:13:53.848 "blocks": 82560, 00:13:53.848 "percent": 43 00:13:53.848 } 00:13:53.848 }, 00:13:53.848 "base_bdevs_list": [ 00:13:53.848 { 00:13:53.848 "name": "spare", 00:13:53.848 "uuid": "a9ac6bc1-eeac-528f-b3a4-1574b03e612c", 00:13:53.848 "is_configured": true, 00:13:53.848 "data_offset": 2048, 00:13:53.848 "data_size": 63488 00:13:53.848 }, 00:13:53.848 { 00:13:53.848 "name": "BaseBdev2", 00:13:53.848 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:13:53.848 "is_configured": true, 00:13:53.848 "data_offset": 2048, 00:13:53.848 "data_size": 63488 00:13:53.848 }, 00:13:53.848 { 00:13:53.848 "name": "BaseBdev3", 00:13:53.848 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:13:53.848 "is_configured": true, 00:13:53.848 "data_offset": 2048, 00:13:53.848 "data_size": 63488 00:13:53.848 }, 00:13:53.848 { 00:13:53.848 "name": "BaseBdev4", 00:13:53.848 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:13:53.848 "is_configured": true, 00:13:53.848 "data_offset": 2048, 00:13:53.848 "data_size": 63488 00:13:53.848 } 00:13:53.848 ] 00:13:53.848 }' 00:13:53.848 09:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.104 09:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:54.104 09:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.105 09:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare 
== \s\p\a\r\e ]] 00:13:54.105 09:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:55.075 09:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:55.075 09:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.075 09:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.075 09:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.075 09:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.075 09:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.075 09:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.075 09:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.075 09:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.075 09:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.075 09:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.075 09:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.075 "name": "raid_bdev1", 00:13:55.075 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:13:55.075 "strip_size_kb": 64, 00:13:55.075 "state": "online", 00:13:55.075 "raid_level": "raid5f", 00:13:55.075 "superblock": true, 00:13:55.075 "num_base_bdevs": 4, 00:13:55.075 "num_base_bdevs_discovered": 4, 00:13:55.075 "num_base_bdevs_operational": 4, 00:13:55.075 "process": { 00:13:55.075 "type": "rebuild", 00:13:55.075 "target": "spare", 00:13:55.075 "progress": { 00:13:55.075 "blocks": 103680, 00:13:55.075 "percent": 54 00:13:55.075 
} 00:13:55.075 }, 00:13:55.075 "base_bdevs_list": [ 00:13:55.075 { 00:13:55.075 "name": "spare", 00:13:55.075 "uuid": "a9ac6bc1-eeac-528f-b3a4-1574b03e612c", 00:13:55.075 "is_configured": true, 00:13:55.075 "data_offset": 2048, 00:13:55.075 "data_size": 63488 00:13:55.075 }, 00:13:55.075 { 00:13:55.075 "name": "BaseBdev2", 00:13:55.075 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:13:55.075 "is_configured": true, 00:13:55.075 "data_offset": 2048, 00:13:55.075 "data_size": 63488 00:13:55.075 }, 00:13:55.075 { 00:13:55.075 "name": "BaseBdev3", 00:13:55.075 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:13:55.075 "is_configured": true, 00:13:55.075 "data_offset": 2048, 00:13:55.075 "data_size": 63488 00:13:55.075 }, 00:13:55.075 { 00:13:55.075 "name": "BaseBdev4", 00:13:55.075 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:13:55.075 "is_configured": true, 00:13:55.075 "data_offset": 2048, 00:13:55.075 "data_size": 63488 00:13:55.075 } 00:13:55.075 ] 00:13:55.075 }' 00:13:55.075 09:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.075 09:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:55.075 09:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.075 09:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:55.075 09:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:56.006 09:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:56.006 09:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.006 09:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.006 09:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:13:56.006 09:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.006 09:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.006 09:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.006 09:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.006 09:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.007 09:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.007 09:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.264 09:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.264 "name": "raid_bdev1", 00:13:56.264 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:13:56.264 "strip_size_kb": 64, 00:13:56.264 "state": "online", 00:13:56.264 "raid_level": "raid5f", 00:13:56.264 "superblock": true, 00:13:56.264 "num_base_bdevs": 4, 00:13:56.264 "num_base_bdevs_discovered": 4, 00:13:56.264 "num_base_bdevs_operational": 4, 00:13:56.264 "process": { 00:13:56.264 "type": "rebuild", 00:13:56.264 "target": "spare", 00:13:56.264 "progress": { 00:13:56.264 "blocks": 124800, 00:13:56.264 "percent": 65 00:13:56.264 } 00:13:56.264 }, 00:13:56.264 "base_bdevs_list": [ 00:13:56.264 { 00:13:56.264 "name": "spare", 00:13:56.264 "uuid": "a9ac6bc1-eeac-528f-b3a4-1574b03e612c", 00:13:56.264 "is_configured": true, 00:13:56.264 "data_offset": 2048, 00:13:56.264 "data_size": 63488 00:13:56.264 }, 00:13:56.264 { 00:13:56.264 "name": "BaseBdev2", 00:13:56.264 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:13:56.264 "is_configured": true, 00:13:56.264 "data_offset": 2048, 00:13:56.264 "data_size": 63488 00:13:56.264 }, 00:13:56.264 { 00:13:56.264 "name": "BaseBdev3", 
00:13:56.264 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:13:56.264 "is_configured": true, 00:13:56.264 "data_offset": 2048, 00:13:56.264 "data_size": 63488 00:13:56.264 }, 00:13:56.264 { 00:13:56.264 "name": "BaseBdev4", 00:13:56.264 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:13:56.264 "is_configured": true, 00:13:56.264 "data_offset": 2048, 00:13:56.264 "data_size": 63488 00:13:56.264 } 00:13:56.264 ] 00:13:56.264 }' 00:13:56.264 09:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.264 09:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:56.264 09:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.264 09:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:56.264 09:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:57.275 09:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:57.275 09:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:57.275 09:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.275 09:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:57.275 09:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.275 09:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.275 09:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.275 09:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.275 09:48:35 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.275 09:48:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.275 09:48:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.275 09:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.275 "name": "raid_bdev1", 00:13:57.275 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:13:57.275 "strip_size_kb": 64, 00:13:57.275 "state": "online", 00:13:57.275 "raid_level": "raid5f", 00:13:57.275 "superblock": true, 00:13:57.275 "num_base_bdevs": 4, 00:13:57.275 "num_base_bdevs_discovered": 4, 00:13:57.275 "num_base_bdevs_operational": 4, 00:13:57.275 "process": { 00:13:57.275 "type": "rebuild", 00:13:57.275 "target": "spare", 00:13:57.275 "progress": { 00:13:57.275 "blocks": 145920, 00:13:57.275 "percent": 76 00:13:57.275 } 00:13:57.275 }, 00:13:57.275 "base_bdevs_list": [ 00:13:57.275 { 00:13:57.275 "name": "spare", 00:13:57.275 "uuid": "a9ac6bc1-eeac-528f-b3a4-1574b03e612c", 00:13:57.275 "is_configured": true, 00:13:57.275 "data_offset": 2048, 00:13:57.275 "data_size": 63488 00:13:57.275 }, 00:13:57.275 { 00:13:57.275 "name": "BaseBdev2", 00:13:57.275 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:13:57.275 "is_configured": true, 00:13:57.275 "data_offset": 2048, 00:13:57.275 "data_size": 63488 00:13:57.275 }, 00:13:57.275 { 00:13:57.275 "name": "BaseBdev3", 00:13:57.275 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:13:57.275 "is_configured": true, 00:13:57.275 "data_offset": 2048, 00:13:57.275 "data_size": 63488 00:13:57.275 }, 00:13:57.275 { 00:13:57.275 "name": "BaseBdev4", 00:13:57.275 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:13:57.275 "is_configured": true, 00:13:57.275 "data_offset": 2048, 00:13:57.275 "data_size": 63488 00:13:57.275 } 00:13:57.275 ] 00:13:57.275 }' 00:13:57.275 09:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:13:57.275 09:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:57.275 09:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.275 09:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:57.275 09:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:58.209 09:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:58.209 09:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.209 09:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.209 09:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.209 09:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.209 09:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.209 09:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.209 09:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.209 09:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.209 09:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.209 09:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.468 09:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.468 "name": "raid_bdev1", 00:13:58.468 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:13:58.468 "strip_size_kb": 64, 00:13:58.468 "state": "online", 00:13:58.468 "raid_level": "raid5f", 
00:13:58.468 "superblock": true, 00:13:58.468 "num_base_bdevs": 4, 00:13:58.468 "num_base_bdevs_discovered": 4, 00:13:58.468 "num_base_bdevs_operational": 4, 00:13:58.468 "process": { 00:13:58.468 "type": "rebuild", 00:13:58.468 "target": "spare", 00:13:58.468 "progress": { 00:13:58.468 "blocks": 167040, 00:13:58.468 "percent": 87 00:13:58.468 } 00:13:58.468 }, 00:13:58.468 "base_bdevs_list": [ 00:13:58.468 { 00:13:58.468 "name": "spare", 00:13:58.468 "uuid": "a9ac6bc1-eeac-528f-b3a4-1574b03e612c", 00:13:58.468 "is_configured": true, 00:13:58.468 "data_offset": 2048, 00:13:58.468 "data_size": 63488 00:13:58.468 }, 00:13:58.468 { 00:13:58.468 "name": "BaseBdev2", 00:13:58.468 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:13:58.468 "is_configured": true, 00:13:58.468 "data_offset": 2048, 00:13:58.468 "data_size": 63488 00:13:58.468 }, 00:13:58.468 { 00:13:58.468 "name": "BaseBdev3", 00:13:58.468 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:13:58.468 "is_configured": true, 00:13:58.468 "data_offset": 2048, 00:13:58.468 "data_size": 63488 00:13:58.468 }, 00:13:58.468 { 00:13:58.468 "name": "BaseBdev4", 00:13:58.468 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:13:58.468 "is_configured": true, 00:13:58.468 "data_offset": 2048, 00:13:58.468 "data_size": 63488 00:13:58.468 } 00:13:58.468 ] 00:13:58.468 }' 00:13:58.468 09:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.468 09:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.468 09:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.468 09:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.468 09:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:59.400 09:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout 
)) 00:13:59.400 09:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.400 09:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.400 09:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.400 09:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.400 09:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.400 09:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.401 09:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.401 09:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.401 09:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.401 09:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.401 09:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.401 "name": "raid_bdev1", 00:13:59.401 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:13:59.401 "strip_size_kb": 64, 00:13:59.401 "state": "online", 00:13:59.401 "raid_level": "raid5f", 00:13:59.401 "superblock": true, 00:13:59.401 "num_base_bdevs": 4, 00:13:59.401 "num_base_bdevs_discovered": 4, 00:13:59.401 "num_base_bdevs_operational": 4, 00:13:59.401 "process": { 00:13:59.401 "type": "rebuild", 00:13:59.401 "target": "spare", 00:13:59.401 "progress": { 00:13:59.401 "blocks": 188160, 00:13:59.401 "percent": 98 00:13:59.401 } 00:13:59.401 }, 00:13:59.401 "base_bdevs_list": [ 00:13:59.401 { 00:13:59.401 "name": "spare", 00:13:59.401 "uuid": "a9ac6bc1-eeac-528f-b3a4-1574b03e612c", 00:13:59.401 "is_configured": true, 00:13:59.401 
"data_offset": 2048, 00:13:59.401 "data_size": 63488 00:13:59.401 }, 00:13:59.401 { 00:13:59.401 "name": "BaseBdev2", 00:13:59.401 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:13:59.401 "is_configured": true, 00:13:59.401 "data_offset": 2048, 00:13:59.401 "data_size": 63488 00:13:59.401 }, 00:13:59.401 { 00:13:59.401 "name": "BaseBdev3", 00:13:59.401 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:13:59.401 "is_configured": true, 00:13:59.401 "data_offset": 2048, 00:13:59.401 "data_size": 63488 00:13:59.401 }, 00:13:59.401 { 00:13:59.401 "name": "BaseBdev4", 00:13:59.401 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:13:59.401 "is_configured": true, 00:13:59.401 "data_offset": 2048, 00:13:59.401 "data_size": 63488 00:13:59.401 } 00:13:59.401 ] 00:13:59.401 }' 00:13:59.401 09:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.401 09:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:59.401 09:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.401 09:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:59.401 09:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:59.659 [2024-10-30 09:48:38.064130] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:59.659 [2024-10-30 09:48:38.064269] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:59.659 [2024-10-30 09:48:38.064371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.592 09:48:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.592 "name": "raid_bdev1", 00:14:00.592 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:14:00.592 "strip_size_kb": 64, 00:14:00.592 "state": "online", 00:14:00.592 "raid_level": "raid5f", 00:14:00.592 "superblock": true, 00:14:00.592 "num_base_bdevs": 4, 00:14:00.592 "num_base_bdevs_discovered": 4, 00:14:00.592 "num_base_bdevs_operational": 4, 00:14:00.592 "base_bdevs_list": [ 00:14:00.592 { 00:14:00.592 "name": "spare", 00:14:00.592 "uuid": "a9ac6bc1-eeac-528f-b3a4-1574b03e612c", 00:14:00.592 "is_configured": true, 00:14:00.592 "data_offset": 2048, 00:14:00.592 "data_size": 63488 00:14:00.592 }, 00:14:00.592 { 00:14:00.592 "name": "BaseBdev2", 00:14:00.592 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:14:00.592 "is_configured": true, 00:14:00.592 "data_offset": 2048, 00:14:00.592 "data_size": 63488 00:14:00.592 }, 00:14:00.592 { 00:14:00.592 "name": "BaseBdev3", 00:14:00.592 "uuid": 
"410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:14:00.592 "is_configured": true, 00:14:00.592 "data_offset": 2048, 00:14:00.592 "data_size": 63488 00:14:00.592 }, 00:14:00.592 { 00:14:00.592 "name": "BaseBdev4", 00:14:00.592 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:14:00.592 "is_configured": true, 00:14:00.592 "data_offset": 2048, 00:14:00.592 "data_size": 63488 00:14:00.592 } 00:14:00.592 ] 00:14:00.592 }' 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.592 
09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.592 "name": "raid_bdev1", 00:14:00.592 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:14:00.592 "strip_size_kb": 64, 00:14:00.592 "state": "online", 00:14:00.592 "raid_level": "raid5f", 00:14:00.592 "superblock": true, 00:14:00.592 "num_base_bdevs": 4, 00:14:00.592 "num_base_bdevs_discovered": 4, 00:14:00.592 "num_base_bdevs_operational": 4, 00:14:00.592 "base_bdevs_list": [ 00:14:00.592 { 00:14:00.592 "name": "spare", 00:14:00.592 "uuid": "a9ac6bc1-eeac-528f-b3a4-1574b03e612c", 00:14:00.592 "is_configured": true, 00:14:00.592 "data_offset": 2048, 00:14:00.592 "data_size": 63488 00:14:00.592 }, 00:14:00.592 { 00:14:00.592 "name": "BaseBdev2", 00:14:00.592 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:14:00.592 "is_configured": true, 00:14:00.592 "data_offset": 2048, 00:14:00.592 "data_size": 63488 00:14:00.592 }, 00:14:00.592 { 00:14:00.592 "name": "BaseBdev3", 00:14:00.592 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:14:00.592 "is_configured": true, 00:14:00.592 "data_offset": 2048, 00:14:00.592 "data_size": 63488 00:14:00.592 }, 00:14:00.592 { 00:14:00.592 "name": "BaseBdev4", 00:14:00.592 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:14:00.592 "is_configured": true, 00:14:00.592 "data_offset": 2048, 00:14:00.592 "data_size": 63488 00:14:00.592 } 00:14:00.592 ] 00:14:00.592 }' 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:00.592 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.851 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:00.851 09:48:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:00.851 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.851 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.851 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.851 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.851 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.851 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.851 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.851 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.851 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.851 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.851 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.851 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.851 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.851 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.851 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.851 "name": "raid_bdev1", 00:14:00.851 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:14:00.851 "strip_size_kb": 64, 00:14:00.851 "state": "online", 00:14:00.851 "raid_level": "raid5f", 00:14:00.851 "superblock": true, 
00:14:00.851 "num_base_bdevs": 4, 00:14:00.851 "num_base_bdevs_discovered": 4, 00:14:00.851 "num_base_bdevs_operational": 4, 00:14:00.851 "base_bdevs_list": [ 00:14:00.851 { 00:14:00.851 "name": "spare", 00:14:00.851 "uuid": "a9ac6bc1-eeac-528f-b3a4-1574b03e612c", 00:14:00.851 "is_configured": true, 00:14:00.851 "data_offset": 2048, 00:14:00.851 "data_size": 63488 00:14:00.851 }, 00:14:00.851 { 00:14:00.851 "name": "BaseBdev2", 00:14:00.851 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:14:00.851 "is_configured": true, 00:14:00.851 "data_offset": 2048, 00:14:00.851 "data_size": 63488 00:14:00.851 }, 00:14:00.851 { 00:14:00.851 "name": "BaseBdev3", 00:14:00.851 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:14:00.851 "is_configured": true, 00:14:00.851 "data_offset": 2048, 00:14:00.851 "data_size": 63488 00:14:00.851 }, 00:14:00.851 { 00:14:00.851 "name": "BaseBdev4", 00:14:00.851 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:14:00.851 "is_configured": true, 00:14:00.851 "data_offset": 2048, 00:14:00.851 "data_size": 63488 00:14:00.851 } 00:14:00.851 ] 00:14:00.851 }' 00:14:00.851 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.851 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.110 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:01.110 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.110 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.110 [2024-10-30 09:48:39.528303] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:01.110 [2024-10-30 09:48:39.528333] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:01.110 [2024-10-30 09:48:39.528393] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:01.110 
[2024-10-30 09:48:39.528471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:01.110 [2024-10-30 09:48:39.528480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:01.110 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.110 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.110 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:01.110 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.110 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.110 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.110 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:01.110 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:01.110 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:01.110 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:01.110 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.110 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:01.110 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:01.110 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:01.110 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:01.110 09:48:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:01.110 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:01.110 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:01.110 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:01.367 /dev/nbd0 00:14:01.367 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:01.367 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:01.367 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:01.367 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:14:01.367 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:01.367 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:01.367 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:01.367 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:14:01.367 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:01.367 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:01.367 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.367 1+0 records in 00:14:01.367 1+0 records out 00:14:01.367 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334294 s, 12.3 MB/s 00:14:01.367 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.367 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:14:01.367 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.367 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:01.367 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:14:01.367 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:01.367 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:01.367 09:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:01.625 /dev/nbd1 00:14:01.625 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:01.625 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:01.625 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:01.625 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:14:01.625 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:01.625 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:01.625 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:01.625 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:14:01.625 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:01.625 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:01.625 09:48:40 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.625 1+0 records in 00:14:01.625 1+0 records out 00:14:01.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263506 s, 15.5 MB/s 00:14:01.625 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.625 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:14:01.625 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.626 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:01.626 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:14:01.626 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:01.626 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:01.626 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:01.626 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:01.626 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.626 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:01.626 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:01.626 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:01.626 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.626 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:01.884 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:01.884 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:01.884 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:01.884 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.884 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.884 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:01.884 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:01.884 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.884 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.884 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # 
return 0 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.143 [2024-10-30 09:48:40.574034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:02.143 [2024-10-30 09:48:40.574094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.143 [2024-10-30 09:48:40.574113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:02.143 [2024-10-30 09:48:40.574121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.143 [2024-10-30 09:48:40.575946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.143 [2024-10-30 09:48:40.575978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:02.143 [2024-10-30 09:48:40.576048] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:02.143 [2024-10-30 09:48:40.576095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:02.143 [2024-10-30 09:48:40.576203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:14:02.143 [2024-10-30 09:48:40.576272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:02.143 [2024-10-30 09:48:40.576331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:02.143 spare 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.143 [2024-10-30 09:48:40.676407] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:02.143 [2024-10-30 09:48:40.676433] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:02.143 [2024-10-30 09:48:40.676664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:14:02.143 [2024-10-30 09:48:40.680286] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:02.143 [2024-10-30 09:48:40.680301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:02.143 [2024-10-30 09:48:40.680441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.143 09:48:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.143 "name": "raid_bdev1", 00:14:02.143 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:14:02.143 "strip_size_kb": 64, 00:14:02.143 "state": "online", 00:14:02.143 "raid_level": "raid5f", 00:14:02.143 "superblock": true, 00:14:02.143 "num_base_bdevs": 4, 00:14:02.143 "num_base_bdevs_discovered": 4, 00:14:02.143 "num_base_bdevs_operational": 4, 00:14:02.143 "base_bdevs_list": [ 00:14:02.143 { 00:14:02.143 "name": "spare", 00:14:02.143 "uuid": "a9ac6bc1-eeac-528f-b3a4-1574b03e612c", 00:14:02.143 "is_configured": true, 00:14:02.143 "data_offset": 2048, 00:14:02.143 "data_size": 63488 
00:14:02.143 }, 00:14:02.143 { 00:14:02.143 "name": "BaseBdev2", 00:14:02.143 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:14:02.143 "is_configured": true, 00:14:02.143 "data_offset": 2048, 00:14:02.143 "data_size": 63488 00:14:02.143 }, 00:14:02.143 { 00:14:02.143 "name": "BaseBdev3", 00:14:02.143 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:14:02.143 "is_configured": true, 00:14:02.143 "data_offset": 2048, 00:14:02.143 "data_size": 63488 00:14:02.143 }, 00:14:02.143 { 00:14:02.143 "name": "BaseBdev4", 00:14:02.143 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:14:02.143 "is_configured": true, 00:14:02.143 "data_offset": 2048, 00:14:02.143 "data_size": 63488 00:14:02.143 } 00:14:02.143 ] 00:14:02.143 }' 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.143 09:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.402 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:02.402 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.402 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:02.402 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:02.402 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.402 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.402 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.402 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.402 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.661 09:48:41 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.661 "name": "raid_bdev1", 00:14:02.661 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:14:02.661 "strip_size_kb": 64, 00:14:02.661 "state": "online", 00:14:02.661 "raid_level": "raid5f", 00:14:02.661 "superblock": true, 00:14:02.661 "num_base_bdevs": 4, 00:14:02.661 "num_base_bdevs_discovered": 4, 00:14:02.661 "num_base_bdevs_operational": 4, 00:14:02.661 "base_bdevs_list": [ 00:14:02.661 { 00:14:02.661 "name": "spare", 00:14:02.661 "uuid": "a9ac6bc1-eeac-528f-b3a4-1574b03e612c", 00:14:02.661 "is_configured": true, 00:14:02.661 "data_offset": 2048, 00:14:02.661 "data_size": 63488 00:14:02.661 }, 00:14:02.661 { 00:14:02.661 "name": "BaseBdev2", 00:14:02.661 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:14:02.661 "is_configured": true, 00:14:02.661 "data_offset": 2048, 00:14:02.661 "data_size": 63488 00:14:02.661 }, 00:14:02.661 { 00:14:02.661 "name": "BaseBdev3", 00:14:02.661 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:14:02.661 "is_configured": true, 00:14:02.661 "data_offset": 2048, 00:14:02.661 "data_size": 63488 00:14:02.661 }, 00:14:02.661 { 00:14:02.661 "name": "BaseBdev4", 00:14:02.661 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:14:02.661 "is_configured": true, 00:14:02.661 "data_offset": 2048, 00:14:02.661 "data_size": 63488 00:14:02.661 } 00:14:02.661 ] 00:14:02.661 }' 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:02.661 09:48:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.661 [2024-10-30 09:48:41.144636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.661 "name": "raid_bdev1", 00:14:02.661 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:14:02.661 "strip_size_kb": 64, 00:14:02.661 "state": "online", 00:14:02.661 "raid_level": "raid5f", 00:14:02.661 "superblock": true, 00:14:02.661 "num_base_bdevs": 4, 00:14:02.661 "num_base_bdevs_discovered": 3, 00:14:02.661 "num_base_bdevs_operational": 3, 00:14:02.661 "base_bdevs_list": [ 00:14:02.661 { 00:14:02.661 "name": null, 00:14:02.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.661 "is_configured": false, 00:14:02.661 "data_offset": 0, 00:14:02.661 "data_size": 63488 00:14:02.661 }, 00:14:02.661 { 00:14:02.661 "name": "BaseBdev2", 00:14:02.661 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:14:02.661 "is_configured": true, 00:14:02.661 "data_offset": 2048, 00:14:02.661 "data_size": 63488 00:14:02.661 }, 00:14:02.661 { 00:14:02.661 "name": "BaseBdev3", 00:14:02.661 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:14:02.661 "is_configured": true, 00:14:02.661 "data_offset": 2048, 
00:14:02.661 "data_size": 63488 00:14:02.661 }, 00:14:02.661 { 00:14:02.661 "name": "BaseBdev4", 00:14:02.661 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:14:02.661 "is_configured": true, 00:14:02.661 "data_offset": 2048, 00:14:02.661 "data_size": 63488 00:14:02.661 } 00:14:02.661 ] 00:14:02.661 }' 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.661 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.919 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:02.919 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.919 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.919 [2024-10-30 09:48:41.464728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:02.919 [2024-10-30 09:48:41.464865] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:02.919 [2024-10-30 09:48:41.464880] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:02.919 [2024-10-30 09:48:41.464912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:02.919 [2024-10-30 09:48:41.472144] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:14:02.919 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.919 09:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:02.919 [2024-10-30 09:48:41.477230] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.292 "name": "raid_bdev1", 00:14:04.292 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:14:04.292 "strip_size_kb": 64, 00:14:04.292 "state": "online", 00:14:04.292 
"raid_level": "raid5f", 00:14:04.292 "superblock": true, 00:14:04.292 "num_base_bdevs": 4, 00:14:04.292 "num_base_bdevs_discovered": 4, 00:14:04.292 "num_base_bdevs_operational": 4, 00:14:04.292 "process": { 00:14:04.292 "type": "rebuild", 00:14:04.292 "target": "spare", 00:14:04.292 "progress": { 00:14:04.292 "blocks": 19200, 00:14:04.292 "percent": 10 00:14:04.292 } 00:14:04.292 }, 00:14:04.292 "base_bdevs_list": [ 00:14:04.292 { 00:14:04.292 "name": "spare", 00:14:04.292 "uuid": "a9ac6bc1-eeac-528f-b3a4-1574b03e612c", 00:14:04.292 "is_configured": true, 00:14:04.292 "data_offset": 2048, 00:14:04.292 "data_size": 63488 00:14:04.292 }, 00:14:04.292 { 00:14:04.292 "name": "BaseBdev2", 00:14:04.292 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:14:04.292 "is_configured": true, 00:14:04.292 "data_offset": 2048, 00:14:04.292 "data_size": 63488 00:14:04.292 }, 00:14:04.292 { 00:14:04.292 "name": "BaseBdev3", 00:14:04.292 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:14:04.292 "is_configured": true, 00:14:04.292 "data_offset": 2048, 00:14:04.292 "data_size": 63488 00:14:04.292 }, 00:14:04.292 { 00:14:04.292 "name": "BaseBdev4", 00:14:04.292 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:14:04.292 "is_configured": true, 00:14:04.292 "data_offset": 2048, 00:14:04.292 "data_size": 63488 00:14:04.292 } 00:14:04.292 ] 00:14:04.292 }' 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.292 [2024-10-30 09:48:42.577875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:04.292 [2024-10-30 09:48:42.583877] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:04.292 [2024-10-30 09:48:42.583930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.292 [2024-10-30 09:48:42.583945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:04.292 [2024-10-30 09:48:42.583952] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.292 "name": "raid_bdev1", 00:14:04.292 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:14:04.292 "strip_size_kb": 64, 00:14:04.292 "state": "online", 00:14:04.292 "raid_level": "raid5f", 00:14:04.292 "superblock": true, 00:14:04.292 "num_base_bdevs": 4, 00:14:04.292 "num_base_bdevs_discovered": 3, 00:14:04.292 "num_base_bdevs_operational": 3, 00:14:04.292 "base_bdevs_list": [ 00:14:04.292 { 00:14:04.292 "name": null, 00:14:04.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.292 "is_configured": false, 00:14:04.292 "data_offset": 0, 00:14:04.292 "data_size": 63488 00:14:04.292 }, 00:14:04.292 { 00:14:04.292 "name": "BaseBdev2", 00:14:04.292 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:14:04.292 "is_configured": true, 00:14:04.292 "data_offset": 2048, 00:14:04.292 "data_size": 63488 00:14:04.292 }, 00:14:04.292 { 00:14:04.292 "name": "BaseBdev3", 00:14:04.292 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:14:04.292 "is_configured": true, 00:14:04.292 "data_offset": 2048, 00:14:04.292 "data_size": 63488 00:14:04.292 }, 00:14:04.292 { 00:14:04.292 "name": "BaseBdev4", 00:14:04.292 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:14:04.292 "is_configured": true, 00:14:04.292 "data_offset": 2048, 00:14:04.292 "data_size": 63488 00:14:04.292 } 00:14:04.292 ] 00:14:04.292 
}' 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.292 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.292 [2024-10-30 09:48:42.904357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:04.292 [2024-10-30 09:48:42.904412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.292 [2024-10-30 09:48:42.904435] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:04.292 [2024-10-30 09:48:42.904445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.292 [2024-10-30 09:48:42.904821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.292 [2024-10-30 09:48:42.904847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:04.292 [2024-10-30 09:48:42.904920] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:04.292 [2024-10-30 09:48:42.904940] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:04.292 [2024-10-30 09:48:42.904948] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:04.292 [2024-10-30 09:48:42.904972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:04.550 [2024-10-30 09:48:42.912713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:14:04.550 spare 00:14:04.550 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.550 09:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:04.550 [2024-10-30 09:48:42.917935] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:05.484 09:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.484 09:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.484 09:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.484 09:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.484 09:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.484 09:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.484 09:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.484 09:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.484 09:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.484 09:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.484 09:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.484 "name": "raid_bdev1", 00:14:05.484 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:14:05.484 "strip_size_kb": 64, 00:14:05.484 "state": 
"online", 00:14:05.484 "raid_level": "raid5f", 00:14:05.484 "superblock": true, 00:14:05.484 "num_base_bdevs": 4, 00:14:05.484 "num_base_bdevs_discovered": 4, 00:14:05.484 "num_base_bdevs_operational": 4, 00:14:05.484 "process": { 00:14:05.484 "type": "rebuild", 00:14:05.484 "target": "spare", 00:14:05.484 "progress": { 00:14:05.484 "blocks": 19200, 00:14:05.484 "percent": 10 00:14:05.484 } 00:14:05.484 }, 00:14:05.484 "base_bdevs_list": [ 00:14:05.484 { 00:14:05.484 "name": "spare", 00:14:05.484 "uuid": "a9ac6bc1-eeac-528f-b3a4-1574b03e612c", 00:14:05.484 "is_configured": true, 00:14:05.484 "data_offset": 2048, 00:14:05.484 "data_size": 63488 00:14:05.484 }, 00:14:05.484 { 00:14:05.484 "name": "BaseBdev2", 00:14:05.484 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:14:05.484 "is_configured": true, 00:14:05.484 "data_offset": 2048, 00:14:05.484 "data_size": 63488 00:14:05.484 }, 00:14:05.484 { 00:14:05.484 "name": "BaseBdev3", 00:14:05.484 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:14:05.484 "is_configured": true, 00:14:05.484 "data_offset": 2048, 00:14:05.484 "data_size": 63488 00:14:05.484 }, 00:14:05.484 { 00:14:05.484 "name": "BaseBdev4", 00:14:05.484 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:14:05.484 "is_configured": true, 00:14:05.484 "data_offset": 2048, 00:14:05.484 "data_size": 63488 00:14:05.484 } 00:14:05.484 ] 00:14:05.484 }' 00:14:05.484 09:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.484 09:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.484 09:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.484 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.484 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:05.484 09:48:44 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.484 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.484 [2024-10-30 09:48:44.010611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:05.484 [2024-10-30 09:48:44.024690] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:05.484 [2024-10-30 09:48:44.024736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.484 [2024-10-30 09:48:44.024752] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:05.484 [2024-10-30 09:48:44.024758] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:05.484 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.484 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:05.484 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.484 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.484 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:05.484 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.484 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.484 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.484 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.484 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.484 09:48:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.484 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.484 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.484 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.484 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.484 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.484 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.484 "name": "raid_bdev1", 00:14:05.484 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:14:05.484 "strip_size_kb": 64, 00:14:05.484 "state": "online", 00:14:05.484 "raid_level": "raid5f", 00:14:05.484 "superblock": true, 00:14:05.484 "num_base_bdevs": 4, 00:14:05.484 "num_base_bdevs_discovered": 3, 00:14:05.484 "num_base_bdevs_operational": 3, 00:14:05.484 "base_bdevs_list": [ 00:14:05.484 { 00:14:05.484 "name": null, 00:14:05.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.484 "is_configured": false, 00:14:05.484 "data_offset": 0, 00:14:05.484 "data_size": 63488 00:14:05.484 }, 00:14:05.484 { 00:14:05.484 "name": "BaseBdev2", 00:14:05.484 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:14:05.484 "is_configured": true, 00:14:05.484 "data_offset": 2048, 00:14:05.484 "data_size": 63488 00:14:05.484 }, 00:14:05.484 { 00:14:05.484 "name": "BaseBdev3", 00:14:05.484 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:14:05.484 "is_configured": true, 00:14:05.484 "data_offset": 2048, 00:14:05.484 "data_size": 63488 00:14:05.484 }, 00:14:05.484 { 00:14:05.484 "name": "BaseBdev4", 00:14:05.484 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:14:05.484 "is_configured": true, 00:14:05.484 "data_offset": 2048, 00:14:05.484 
"data_size": 63488 00:14:05.484 } 00:14:05.484 ] 00:14:05.484 }' 00:14:05.484 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.484 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.742 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:05.742 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.743 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:05.743 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:05.743 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.743 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.743 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.743 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.743 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.001 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.001 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.001 "name": "raid_bdev1", 00:14:06.001 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:14:06.001 "strip_size_kb": 64, 00:14:06.001 "state": "online", 00:14:06.001 "raid_level": "raid5f", 00:14:06.001 "superblock": true, 00:14:06.001 "num_base_bdevs": 4, 00:14:06.001 "num_base_bdevs_discovered": 3, 00:14:06.001 "num_base_bdevs_operational": 3, 00:14:06.001 "base_bdevs_list": [ 00:14:06.001 { 00:14:06.001 "name": null, 00:14:06.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.001 
"is_configured": false, 00:14:06.001 "data_offset": 0, 00:14:06.001 "data_size": 63488 00:14:06.001 }, 00:14:06.001 { 00:14:06.001 "name": "BaseBdev2", 00:14:06.001 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:14:06.001 "is_configured": true, 00:14:06.001 "data_offset": 2048, 00:14:06.001 "data_size": 63488 00:14:06.001 }, 00:14:06.001 { 00:14:06.001 "name": "BaseBdev3", 00:14:06.001 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:14:06.001 "is_configured": true, 00:14:06.001 "data_offset": 2048, 00:14:06.001 "data_size": 63488 00:14:06.001 }, 00:14:06.001 { 00:14:06.001 "name": "BaseBdev4", 00:14:06.001 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:14:06.001 "is_configured": true, 00:14:06.001 "data_offset": 2048, 00:14:06.001 "data_size": 63488 00:14:06.001 } 00:14:06.001 ] 00:14:06.001 }' 00:14:06.001 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.001 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:06.001 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.001 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:06.001 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:06.001 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.001 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.001 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.001 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:06.001 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.001 09:48:44 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.001 [2024-10-30 09:48:44.457051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:06.001 [2024-10-30 09:48:44.457104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.001 [2024-10-30 09:48:44.457121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:06.001 [2024-10-30 09:48:44.457129] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.001 [2024-10-30 09:48:44.457497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.001 [2024-10-30 09:48:44.457518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:06.001 [2024-10-30 09:48:44.457578] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:06.001 [2024-10-30 09:48:44.457589] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:06.001 [2024-10-30 09:48:44.457597] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:06.001 [2024-10-30 09:48:44.457605] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:06.001 BaseBdev1 00:14:06.001 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.001 09:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:06.935 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:06.935 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.935 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:06.935 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:06.935 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.935 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.935 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.935 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.935 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.935 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.935 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.935 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.935 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.935 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.935 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.935 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.935 "name": "raid_bdev1", 00:14:06.935 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:14:06.935 "strip_size_kb": 64, 00:14:06.935 "state": "online", 00:14:06.935 "raid_level": "raid5f", 00:14:06.935 "superblock": true, 00:14:06.935 "num_base_bdevs": 4, 00:14:06.935 "num_base_bdevs_discovered": 3, 00:14:06.935 "num_base_bdevs_operational": 3, 00:14:06.935 "base_bdevs_list": [ 00:14:06.935 { 00:14:06.935 "name": null, 00:14:06.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.935 "is_configured": false, 00:14:06.935 
"data_offset": 0, 00:14:06.935 "data_size": 63488 00:14:06.935 }, 00:14:06.935 { 00:14:06.935 "name": "BaseBdev2", 00:14:06.935 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:14:06.935 "is_configured": true, 00:14:06.935 "data_offset": 2048, 00:14:06.935 "data_size": 63488 00:14:06.935 }, 00:14:06.935 { 00:14:06.935 "name": "BaseBdev3", 00:14:06.935 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:14:06.935 "is_configured": true, 00:14:06.935 "data_offset": 2048, 00:14:06.935 "data_size": 63488 00:14:06.935 }, 00:14:06.935 { 00:14:06.935 "name": "BaseBdev4", 00:14:06.935 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:14:06.935 "is_configured": true, 00:14:06.935 "data_offset": 2048, 00:14:06.935 "data_size": 63488 00:14:06.935 } 00:14:06.935 ] 00:14:06.935 }' 00:14:06.935 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.935 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.193 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:07.193 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.193 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:07.193 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:07.193 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.193 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.194 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.194 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.194 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:07.194 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.194 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.194 "name": "raid_bdev1", 00:14:07.194 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:14:07.194 "strip_size_kb": 64, 00:14:07.194 "state": "online", 00:14:07.194 "raid_level": "raid5f", 00:14:07.194 "superblock": true, 00:14:07.194 "num_base_bdevs": 4, 00:14:07.194 "num_base_bdevs_discovered": 3, 00:14:07.194 "num_base_bdevs_operational": 3, 00:14:07.194 "base_bdevs_list": [ 00:14:07.194 { 00:14:07.194 "name": null, 00:14:07.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.194 "is_configured": false, 00:14:07.194 "data_offset": 0, 00:14:07.194 "data_size": 63488 00:14:07.194 }, 00:14:07.194 { 00:14:07.194 "name": "BaseBdev2", 00:14:07.194 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:14:07.194 "is_configured": true, 00:14:07.194 "data_offset": 2048, 00:14:07.194 "data_size": 63488 00:14:07.194 }, 00:14:07.194 { 00:14:07.194 "name": "BaseBdev3", 00:14:07.194 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:14:07.194 "is_configured": true, 00:14:07.194 "data_offset": 2048, 00:14:07.194 "data_size": 63488 00:14:07.194 }, 00:14:07.194 { 00:14:07.194 "name": "BaseBdev4", 00:14:07.194 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:14:07.194 "is_configured": true, 00:14:07.194 "data_offset": 2048, 00:14:07.194 "data_size": 63488 00:14:07.194 } 00:14:07.194 ] 00:14:07.194 }' 00:14:07.194 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.452 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:07.452 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.452 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:07.452 
09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:07.452 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:14:07.452 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:07.452 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:07.452 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.452 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:07.452 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.452 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:07.452 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.452 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.452 [2024-10-30 09:48:45.873381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:07.452 [2024-10-30 09:48:45.873506] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:07.452 [2024-10-30 09:48:45.873520] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:07.452 request: 00:14:07.452 { 00:14:07.453 "base_bdev": "BaseBdev1", 00:14:07.453 "raid_bdev": "raid_bdev1", 00:14:07.453 "method": "bdev_raid_add_base_bdev", 00:14:07.453 "req_id": 1 00:14:07.453 } 00:14:07.453 Got JSON-RPC error response 00:14:07.453 response: 00:14:07.453 { 00:14:07.453 "code": -22, 00:14:07.453 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:14:07.453 } 00:14:07.453 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:07.453 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:14:07.453 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:07.453 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:07.453 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:07.453 09:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:08.387 09:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:08.387 09:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.387 09:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.387 09:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.387 09:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.387 09:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.387 09:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.387 09:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.387 09:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.387 09:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.387 09:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.387 09:48:46 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.387 09:48:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.387 09:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.387 09:48:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.387 09:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.387 "name": "raid_bdev1", 00:14:08.387 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:14:08.387 "strip_size_kb": 64, 00:14:08.387 "state": "online", 00:14:08.387 "raid_level": "raid5f", 00:14:08.387 "superblock": true, 00:14:08.387 "num_base_bdevs": 4, 00:14:08.387 "num_base_bdevs_discovered": 3, 00:14:08.387 "num_base_bdevs_operational": 3, 00:14:08.387 "base_bdevs_list": [ 00:14:08.387 { 00:14:08.387 "name": null, 00:14:08.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.387 "is_configured": false, 00:14:08.387 "data_offset": 0, 00:14:08.387 "data_size": 63488 00:14:08.387 }, 00:14:08.387 { 00:14:08.387 "name": "BaseBdev2", 00:14:08.387 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:14:08.387 "is_configured": true, 00:14:08.387 "data_offset": 2048, 00:14:08.387 "data_size": 63488 00:14:08.387 }, 00:14:08.387 { 00:14:08.387 "name": "BaseBdev3", 00:14:08.387 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:14:08.387 "is_configured": true, 00:14:08.387 "data_offset": 2048, 00:14:08.387 "data_size": 63488 00:14:08.387 }, 00:14:08.387 { 00:14:08.387 "name": "BaseBdev4", 00:14:08.387 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:14:08.387 "is_configured": true, 00:14:08.387 "data_offset": 2048, 00:14:08.387 "data_size": 63488 00:14:08.387 } 00:14:08.387 ] 00:14:08.387 }' 00:14:08.387 09:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.387 09:48:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:14:08.647 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:08.647 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.647 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:08.647 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:08.647 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.647 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.647 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.647 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.647 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.647 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.647 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.647 "name": "raid_bdev1", 00:14:08.647 "uuid": "277ca522-e99b-4b95-aa0a-8d3ff4548c1a", 00:14:08.647 "strip_size_kb": 64, 00:14:08.647 "state": "online", 00:14:08.647 "raid_level": "raid5f", 00:14:08.647 "superblock": true, 00:14:08.647 "num_base_bdevs": 4, 00:14:08.647 "num_base_bdevs_discovered": 3, 00:14:08.647 "num_base_bdevs_operational": 3, 00:14:08.647 "base_bdevs_list": [ 00:14:08.647 { 00:14:08.647 "name": null, 00:14:08.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.647 "is_configured": false, 00:14:08.647 "data_offset": 0, 00:14:08.647 "data_size": 63488 00:14:08.647 }, 00:14:08.647 { 00:14:08.647 "name": "BaseBdev2", 00:14:08.647 "uuid": "66ba0454-405a-502d-90cb-2c0398a2663b", 00:14:08.647 "is_configured": true, 
00:14:08.647 "data_offset": 2048, 00:14:08.647 "data_size": 63488 00:14:08.647 }, 00:14:08.647 { 00:14:08.647 "name": "BaseBdev3", 00:14:08.647 "uuid": "410ff564-1e2b-5071-9089-a7e2ea6e725f", 00:14:08.647 "is_configured": true, 00:14:08.647 "data_offset": 2048, 00:14:08.647 "data_size": 63488 00:14:08.647 }, 00:14:08.647 { 00:14:08.647 "name": "BaseBdev4", 00:14:08.647 "uuid": "eaebb7b4-2c7f-51ee-b12a-ec65c0d11d4c", 00:14:08.647 "is_configured": true, 00:14:08.647 "data_offset": 2048, 00:14:08.647 "data_size": 63488 00:14:08.647 } 00:14:08.647 ] 00:14:08.647 }' 00:14:08.647 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.647 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:08.647 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.906 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:08.906 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82716 00:14:08.906 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 82716 ']' 00:14:08.906 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 82716 00:14:08.906 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:14:08.906 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:08.906 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82716 00:14:08.906 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:08.906 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:08.906 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 
-- # echo 'killing process with pid 82716' 00:14:08.906 killing process with pid 82716 00:14:08.906 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 82716 00:14:08.906 Received shutdown signal, test time was about 60.000000 seconds 00:14:08.906 00:14:08.906 Latency(us) 00:14:08.906 [2024-10-30T09:48:47.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.906 [2024-10-30T09:48:47.526Z] =================================================================================================================== 00:14:08.906 [2024-10-30T09:48:47.526Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:08.906 [2024-10-30 09:48:47.323002] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:08.906 09:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 82716 00:14:08.906 [2024-10-30 09:48:47.323112] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.906 [2024-10-30 09:48:47.323175] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:08.906 [2024-10-30 09:48:47.323188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:09.164 [2024-10-30 09:48:47.561974] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:09.731 09:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:09.731 00:14:09.731 real 0m24.465s 00:14:09.731 user 0m29.680s 00:14:09.731 sys 0m2.169s 00:14:09.731 09:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:09.731 09:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.731 ************************************ 00:14:09.731 END TEST raid5f_rebuild_test_sb 00:14:09.731 ************************************ 00:14:09.731 09:48:48 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:14:09.731 09:48:48 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:14:09.731 09:48:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:09.731 09:48:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:09.731 09:48:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:09.731 ************************************ 00:14:09.731 START TEST raid_state_function_test_sb_4k 00:14:09.731 ************************************ 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:09.731 09:48:48 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=83511 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83511' 00:14:09.731 Process raid pid: 83511 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 83511 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 83511 ']' 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:09.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:09.731 09:48:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:09.731 [2024-10-30 09:48:48.235957] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:14:09.731 [2024-10-30 09:48:48.236080] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.990 [2024-10-30 09:48:48.391821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.990 [2024-10-30 09:48:48.477403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.990 [2024-10-30 09:48:48.590029] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.990 [2024-10-30 09:48:48.590065] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.556 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:10.556 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:14:10.556 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:10.556 09:48:49 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.556 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:10.556 [2024-10-30 09:48:49.077151] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:10.556 [2024-10-30 09:48:49.077193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:10.556 [2024-10-30 09:48:49.077202] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:10.556 [2024-10-30 09:48:49.077210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:10.556 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.556 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:10.556 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.556 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.556 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.556 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.556 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:10.556 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.556 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.556 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.556 09:48:49 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.556 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.556 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.556 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.556 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:10.556 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.556 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.556 "name": "Existed_Raid", 00:14:10.556 "uuid": "4526c414-ed2a-473a-a99d-ee78968d896d", 00:14:10.556 "strip_size_kb": 0, 00:14:10.556 "state": "configuring", 00:14:10.556 "raid_level": "raid1", 00:14:10.556 "superblock": true, 00:14:10.556 "num_base_bdevs": 2, 00:14:10.556 "num_base_bdevs_discovered": 0, 00:14:10.556 "num_base_bdevs_operational": 2, 00:14:10.556 "base_bdevs_list": [ 00:14:10.556 { 00:14:10.556 "name": "BaseBdev1", 00:14:10.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.556 "is_configured": false, 00:14:10.556 "data_offset": 0, 00:14:10.556 "data_size": 0 00:14:10.556 }, 00:14:10.556 { 00:14:10.556 "name": "BaseBdev2", 00:14:10.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.556 "is_configured": false, 00:14:10.556 "data_offset": 0, 00:14:10.556 "data_size": 0 00:14:10.556 } 00:14:10.556 ] 00:14:10.556 }' 00:14:10.556 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.556 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:10.814 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:14:10.814 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.814 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:10.814 [2024-10-30 09:48:49.397169] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:10.814 [2024-10-30 09:48:49.397200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:10.814 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.814 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:10.814 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.814 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:10.814 [2024-10-30 09:48:49.405178] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:10.814 [2024-10-30 09:48:49.405210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:10.814 [2024-10-30 09:48:49.405217] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:10.814 [2024-10-30 09:48:49.405226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:10.814 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.814 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:14:10.814 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.814 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:14:10.814 [2024-10-30 09:48:49.433271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:11.073 BaseBdev1 00:14:11.073 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.073 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:11.073 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:11.073 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:11.073 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:14:11.073 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:11.073 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:11.073 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:11.073 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.073 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:11.073 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.073 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:11.073 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.073 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:11.073 [ 00:14:11.073 { 00:14:11.073 "name": "BaseBdev1", 00:14:11.073 "aliases": [ 00:14:11.073 "ff879f66-b19d-4943-a1d7-11b96712de25" 00:14:11.073 
], 00:14:11.073 "product_name": "Malloc disk", 00:14:11.073 "block_size": 4096, 00:14:11.073 "num_blocks": 8192, 00:14:11.073 "uuid": "ff879f66-b19d-4943-a1d7-11b96712de25", 00:14:11.073 "assigned_rate_limits": { 00:14:11.073 "rw_ios_per_sec": 0, 00:14:11.073 "rw_mbytes_per_sec": 0, 00:14:11.073 "r_mbytes_per_sec": 0, 00:14:11.073 "w_mbytes_per_sec": 0 00:14:11.073 }, 00:14:11.073 "claimed": true, 00:14:11.073 "claim_type": "exclusive_write", 00:14:11.073 "zoned": false, 00:14:11.073 "supported_io_types": { 00:14:11.073 "read": true, 00:14:11.073 "write": true, 00:14:11.073 "unmap": true, 00:14:11.073 "flush": true, 00:14:11.073 "reset": true, 00:14:11.073 "nvme_admin": false, 00:14:11.073 "nvme_io": false, 00:14:11.073 "nvme_io_md": false, 00:14:11.073 "write_zeroes": true, 00:14:11.073 "zcopy": true, 00:14:11.073 "get_zone_info": false, 00:14:11.073 "zone_management": false, 00:14:11.073 "zone_append": false, 00:14:11.073 "compare": false, 00:14:11.073 "compare_and_write": false, 00:14:11.073 "abort": true, 00:14:11.073 "seek_hole": false, 00:14:11.073 "seek_data": false, 00:14:11.073 "copy": true, 00:14:11.073 "nvme_iov_md": false 00:14:11.073 }, 00:14:11.073 "memory_domains": [ 00:14:11.073 { 00:14:11.073 "dma_device_id": "system", 00:14:11.073 "dma_device_type": 1 00:14:11.073 }, 00:14:11.073 { 00:14:11.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.073 "dma_device_type": 2 00:14:11.073 } 00:14:11.073 ], 00:14:11.073 "driver_specific": {} 00:14:11.073 } 00:14:11.073 ] 00:14:11.073 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.073 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:14:11.073 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:11.073 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:11.073 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:11.074 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.074 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.074 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:11.074 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.074 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.074 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.074 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.074 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.074 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.074 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:11.074 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.074 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.074 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.074 "name": "Existed_Raid", 00:14:11.074 "uuid": "d127324e-3374-46a3-972e-27589f16062a", 00:14:11.074 "strip_size_kb": 0, 00:14:11.074 "state": "configuring", 00:14:11.074 "raid_level": "raid1", 00:14:11.074 "superblock": true, 00:14:11.074 "num_base_bdevs": 2, 00:14:11.074 "num_base_bdevs_discovered": 1, 
00:14:11.074 "num_base_bdevs_operational": 2, 00:14:11.074 "base_bdevs_list": [ 00:14:11.074 { 00:14:11.074 "name": "BaseBdev1", 00:14:11.074 "uuid": "ff879f66-b19d-4943-a1d7-11b96712de25", 00:14:11.074 "is_configured": true, 00:14:11.074 "data_offset": 256, 00:14:11.074 "data_size": 7936 00:14:11.074 }, 00:14:11.074 { 00:14:11.074 "name": "BaseBdev2", 00:14:11.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.074 "is_configured": false, 00:14:11.074 "data_offset": 0, 00:14:11.074 "data_size": 0 00:14:11.074 } 00:14:11.074 ] 00:14:11.074 }' 00:14:11.074 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.074 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:11.331 [2024-10-30 09:48:49.761361] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:11.331 [2024-10-30 09:48:49.761402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:11.331 [2024-10-30 09:48:49.769400] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:11.331 [2024-10-30 09:48:49.770922] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:11.331 [2024-10-30 09:48:49.770959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.331 "name": "Existed_Raid", 00:14:11.331 "uuid": "f3392c4f-f7ae-4ecc-abe8-4db4a310e4cd", 00:14:11.331 "strip_size_kb": 0, 00:14:11.331 "state": "configuring", 00:14:11.331 "raid_level": "raid1", 00:14:11.331 "superblock": true, 00:14:11.331 "num_base_bdevs": 2, 00:14:11.331 "num_base_bdevs_discovered": 1, 00:14:11.331 "num_base_bdevs_operational": 2, 00:14:11.331 "base_bdevs_list": [ 00:14:11.331 { 00:14:11.331 "name": "BaseBdev1", 00:14:11.331 "uuid": "ff879f66-b19d-4943-a1d7-11b96712de25", 00:14:11.331 "is_configured": true, 00:14:11.331 "data_offset": 256, 00:14:11.331 "data_size": 7936 00:14:11.331 }, 00:14:11.331 { 00:14:11.331 "name": "BaseBdev2", 00:14:11.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.331 "is_configured": false, 00:14:11.331 "data_offset": 0, 00:14:11.331 "data_size": 0 00:14:11.331 } 00:14:11.331 ] 00:14:11.331 }' 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.331 09:48:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:11.588 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:14:11.588 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.588 09:48:50 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:11.588 [2024-10-30 09:48:50.099690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:11.588 [2024-10-30 09:48:50.099863] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:11.588 [2024-10-30 09:48:50.099874] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:11.588 BaseBdev2 00:14:11.588 [2024-10-30 09:48:50.100103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:11.588 [2024-10-30 09:48:50.100220] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:11.588 [2024-10-30 09:48:50.100229] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:11.588 [2024-10-30 09:48:50.100337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.588 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.588 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:11.588 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:11.588 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:11.588 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:14:11.588 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:11.588 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:11.588 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:11.588 09:48:50 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.588 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:11.588 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.588 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:11.588 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.588 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:11.588 [ 00:14:11.588 { 00:14:11.588 "name": "BaseBdev2", 00:14:11.588 "aliases": [ 00:14:11.589 "84eb95f2-ce78-47fb-8c1d-65c19370258f" 00:14:11.589 ], 00:14:11.589 "product_name": "Malloc disk", 00:14:11.589 "block_size": 4096, 00:14:11.589 "num_blocks": 8192, 00:14:11.589 "uuid": "84eb95f2-ce78-47fb-8c1d-65c19370258f", 00:14:11.589 "assigned_rate_limits": { 00:14:11.589 "rw_ios_per_sec": 0, 00:14:11.589 "rw_mbytes_per_sec": 0, 00:14:11.589 "r_mbytes_per_sec": 0, 00:14:11.589 "w_mbytes_per_sec": 0 00:14:11.589 }, 00:14:11.589 "claimed": true, 00:14:11.589 "claim_type": "exclusive_write", 00:14:11.589 "zoned": false, 00:14:11.589 "supported_io_types": { 00:14:11.589 "read": true, 00:14:11.589 "write": true, 00:14:11.589 "unmap": true, 00:14:11.589 "flush": true, 00:14:11.589 "reset": true, 00:14:11.589 "nvme_admin": false, 00:14:11.589 "nvme_io": false, 00:14:11.589 "nvme_io_md": false, 00:14:11.589 "write_zeroes": true, 00:14:11.589 "zcopy": true, 00:14:11.589 "get_zone_info": false, 00:14:11.589 "zone_management": false, 00:14:11.589 "zone_append": false, 00:14:11.589 "compare": false, 00:14:11.589 "compare_and_write": false, 00:14:11.589 "abort": true, 00:14:11.589 "seek_hole": false, 00:14:11.589 "seek_data": false, 00:14:11.589 "copy": true, 00:14:11.589 "nvme_iov_md": false 
00:14:11.589 }, 00:14:11.589 "memory_domains": [ 00:14:11.589 { 00:14:11.589 "dma_device_id": "system", 00:14:11.589 "dma_device_type": 1 00:14:11.589 }, 00:14:11.589 { 00:14:11.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.589 "dma_device_type": 2 00:14:11.589 } 00:14:11.589 ], 00:14:11.589 "driver_specific": {} 00:14:11.589 } 00:14:11.589 ] 00:14:11.589 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.589 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:14:11.589 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:11.589 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:11.589 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:11.589 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.589 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.589 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.589 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.589 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:11.589 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.589 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.589 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.589 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:14:11.589 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.589 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.589 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.589 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:11.589 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.589 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.589 "name": "Existed_Raid", 00:14:11.589 "uuid": "f3392c4f-f7ae-4ecc-abe8-4db4a310e4cd", 00:14:11.589 "strip_size_kb": 0, 00:14:11.589 "state": "online", 00:14:11.589 "raid_level": "raid1", 00:14:11.589 "superblock": true, 00:14:11.589 "num_base_bdevs": 2, 00:14:11.589 "num_base_bdevs_discovered": 2, 00:14:11.589 "num_base_bdevs_operational": 2, 00:14:11.589 "base_bdevs_list": [ 00:14:11.589 { 00:14:11.589 "name": "BaseBdev1", 00:14:11.589 "uuid": "ff879f66-b19d-4943-a1d7-11b96712de25", 00:14:11.589 "is_configured": true, 00:14:11.589 "data_offset": 256, 00:14:11.589 "data_size": 7936 00:14:11.589 }, 00:14:11.589 { 00:14:11.589 "name": "BaseBdev2", 00:14:11.589 "uuid": "84eb95f2-ce78-47fb-8c1d-65c19370258f", 00:14:11.589 "is_configured": true, 00:14:11.589 "data_offset": 256, 00:14:11.589 "data_size": 7936 00:14:11.589 } 00:14:11.589 ] 00:14:11.589 }' 00:14:11.589 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.589 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:11.846 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:11.846 09:48:50 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:11.846 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:11.846 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:11.846 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:14:11.846 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:11.846 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:11.846 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:11.846 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.846 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:11.846 [2024-10-30 09:48:50.448022] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:11.846 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.104 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:12.104 "name": "Existed_Raid", 00:14:12.104 "aliases": [ 00:14:12.104 "f3392c4f-f7ae-4ecc-abe8-4db4a310e4cd" 00:14:12.104 ], 00:14:12.104 "product_name": "Raid Volume", 00:14:12.104 "block_size": 4096, 00:14:12.104 "num_blocks": 7936, 00:14:12.104 "uuid": "f3392c4f-f7ae-4ecc-abe8-4db4a310e4cd", 00:14:12.104 "assigned_rate_limits": { 00:14:12.104 "rw_ios_per_sec": 0, 00:14:12.104 "rw_mbytes_per_sec": 0, 00:14:12.104 "r_mbytes_per_sec": 0, 00:14:12.104 "w_mbytes_per_sec": 0 00:14:12.104 }, 00:14:12.104 "claimed": false, 00:14:12.104 "zoned": false, 00:14:12.104 "supported_io_types": { 00:14:12.104 "read": true, 
00:14:12.104 "write": true, 00:14:12.104 "unmap": false, 00:14:12.104 "flush": false, 00:14:12.104 "reset": true, 00:14:12.104 "nvme_admin": false, 00:14:12.104 "nvme_io": false, 00:14:12.104 "nvme_io_md": false, 00:14:12.104 "write_zeroes": true, 00:14:12.104 "zcopy": false, 00:14:12.104 "get_zone_info": false, 00:14:12.104 "zone_management": false, 00:14:12.104 "zone_append": false, 00:14:12.104 "compare": false, 00:14:12.104 "compare_and_write": false, 00:14:12.104 "abort": false, 00:14:12.104 "seek_hole": false, 00:14:12.104 "seek_data": false, 00:14:12.104 "copy": false, 00:14:12.104 "nvme_iov_md": false 00:14:12.104 }, 00:14:12.104 "memory_domains": [ 00:14:12.104 { 00:14:12.104 "dma_device_id": "system", 00:14:12.104 "dma_device_type": 1 00:14:12.104 }, 00:14:12.104 { 00:14:12.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.104 "dma_device_type": 2 00:14:12.104 }, 00:14:12.104 { 00:14:12.104 "dma_device_id": "system", 00:14:12.104 "dma_device_type": 1 00:14:12.104 }, 00:14:12.104 { 00:14:12.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.104 "dma_device_type": 2 00:14:12.104 } 00:14:12.104 ], 00:14:12.104 "driver_specific": { 00:14:12.104 "raid": { 00:14:12.104 "uuid": "f3392c4f-f7ae-4ecc-abe8-4db4a310e4cd", 00:14:12.104 "strip_size_kb": 0, 00:14:12.104 "state": "online", 00:14:12.104 "raid_level": "raid1", 00:14:12.104 "superblock": true, 00:14:12.104 "num_base_bdevs": 2, 00:14:12.104 "num_base_bdevs_discovered": 2, 00:14:12.104 "num_base_bdevs_operational": 2, 00:14:12.104 "base_bdevs_list": [ 00:14:12.104 { 00:14:12.104 "name": "BaseBdev1", 00:14:12.104 "uuid": "ff879f66-b19d-4943-a1d7-11b96712de25", 00:14:12.104 "is_configured": true, 00:14:12.104 "data_offset": 256, 00:14:12.104 "data_size": 7936 00:14:12.104 }, 00:14:12.104 { 00:14:12.104 "name": "BaseBdev2", 00:14:12.104 "uuid": "84eb95f2-ce78-47fb-8c1d-65c19370258f", 00:14:12.104 "is_configured": true, 00:14:12.104 "data_offset": 256, 00:14:12.104 "data_size": 7936 00:14:12.104 } 
00:14:12.104 ] 00:14:12.104 } 00:14:12.104 } 00:14:12.104 }' 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:12.105 BaseBdev2' 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.105 09:48:50 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:12.105 [2024-10-30 09:48:50.595843] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:12.105 09:48:50 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.105 "name": "Existed_Raid", 00:14:12.105 "uuid": "f3392c4f-f7ae-4ecc-abe8-4db4a310e4cd", 00:14:12.105 "strip_size_kb": 0, 00:14:12.105 "state": "online", 00:14:12.105 "raid_level": "raid1", 00:14:12.105 "superblock": true, 00:14:12.105 
"num_base_bdevs": 2, 00:14:12.105 "num_base_bdevs_discovered": 1, 00:14:12.105 "num_base_bdevs_operational": 1, 00:14:12.105 "base_bdevs_list": [ 00:14:12.105 { 00:14:12.105 "name": null, 00:14:12.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.105 "is_configured": false, 00:14:12.105 "data_offset": 0, 00:14:12.105 "data_size": 7936 00:14:12.105 }, 00:14:12.105 { 00:14:12.105 "name": "BaseBdev2", 00:14:12.105 "uuid": "84eb95f2-ce78-47fb-8c1d-65c19370258f", 00:14:12.105 "is_configured": true, 00:14:12.105 "data_offset": 256, 00:14:12.105 "data_size": 7936 00:14:12.105 } 00:14:12.105 ] 00:14:12.105 }' 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.105 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:12.364 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:12.364 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:12.364 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:12.364 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.364 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.364 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:12.364 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.622 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:12.622 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:12.622 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:14:12.622 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.622 09:48:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:12.622 [2024-10-30 09:48:50.998010] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:12.622 [2024-10-30 09:48:50.998098] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:12.622 [2024-10-30 09:48:51.045013] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:12.622 [2024-10-30 09:48:51.045048] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:12.622 [2024-10-30 09:48:51.045074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:12.622 09:48:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.622 09:48:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:12.622 09:48:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:12.622 09:48:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:12.622 09:48:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.622 09:48:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.622 09:48:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:12.622 09:48:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.622 09:48:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:12.622 09:48:51 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:12.622 09:48:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:14:12.622 09:48:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 83511 00:14:12.622 09:48:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 83511 ']' 00:14:12.622 09:48:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 83511 00:14:12.622 09:48:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:14:12.622 09:48:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:12.622 09:48:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83511 00:14:12.622 killing process with pid 83511 00:14:12.622 09:48:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:12.622 09:48:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:12.622 09:48:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83511' 00:14:12.622 09:48:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@971 -- # kill 83511 00:14:12.622 [2024-10-30 09:48:51.105684] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:12.622 09:48:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@976 -- # wait 83511 00:14:12.622 [2024-10-30 09:48:51.114095] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:13.188 09:48:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:14:13.188 00:14:13.188 real 0m3.506s 00:14:13.188 user 0m5.146s 00:14:13.188 sys 0m0.560s 00:14:13.188 
************************************ 00:14:13.188 END TEST raid_state_function_test_sb_4k 00:14:13.188 ************************************ 00:14:13.188 09:48:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:13.188 09:48:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:13.188 09:48:51 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:14:13.188 09:48:51 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:13.189 09:48:51 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:13.189 09:48:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:13.189 ************************************ 00:14:13.189 START TEST raid_superblock_test_4k 00:14:13.189 ************************************ 00:14:13.189 09:48:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:14:13.189 09:48:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:13.189 09:48:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:14:13.189 09:48:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:13.189 09:48:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:13.189 09:48:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:13.189 09:48:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:13.189 09:48:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:13.189 09:48:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:13.189 09:48:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:14:13.189 09:48:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:13.189 09:48:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:13.189 09:48:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:13.189 09:48:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:13.189 09:48:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:13.189 09:48:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:13.189 09:48:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=83751 00:14:13.189 09:48:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 83751 00:14:13.189 09:48:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # '[' -z 83751 ']' 00:14:13.189 09:48:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.189 09:48:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:13.189 09:48:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.189 09:48:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:13.189 09:48:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:13.189 09:48:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:13.189 [2024-10-30 09:48:51.769786] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:14:13.189 [2024-10-30 09:48:51.769883] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83751 ] 00:14:13.447 [2024-10-30 09:48:51.918925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.447 [2024-10-30 09:48:51.998155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.704 [2024-10-30 09:48:52.104311] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:13.704 [2024-10-30 09:48:52.104338] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.270 09:48:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:14.270 09:48:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@866 -- # return 0 00:14:14.270 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:14.270 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:14.270 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:14.270 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:14.270 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:14.270 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:14.270 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:14.270 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:14.270 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:14:14.270 09:48:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.270 09:48:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:14.270 malloc1 00:14:14.270 09:48:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.270 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:14.270 09:48:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.270 09:48:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:14.270 [2024-10-30 09:48:52.642559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:14.270 [2024-10-30 09:48:52.642712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.271 [2024-10-30 09:48:52.642734] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:14.271 [2024-10-30 09:48:52.642742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.271 [2024-10-30 09:48:52.644418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.271 [2024-10-30 09:48:52.644447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:14.271 pt1 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:14.271 malloc2 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:14.271 [2024-10-30 09:48:52.673395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:14.271 [2024-10-30 09:48:52.673518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.271 [2024-10-30 09:48:52.673538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:14.271 [2024-10-30 09:48:52.673545] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.271 [2024-10-30 09:48:52.675195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.271 [2024-10-30 
09:48:52.675221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:14.271 pt2 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:14.271 [2024-10-30 09:48:52.681440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:14.271 [2024-10-30 09:48:52.682878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:14.271 [2024-10-30 09:48:52.683006] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:14.271 [2024-10-30 09:48:52.683018] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:14.271 [2024-10-30 09:48:52.683216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:14.271 [2024-10-30 09:48:52.683325] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:14.271 [2024-10-30 09:48:52.683337] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:14.271 [2024-10-30 09:48:52.683444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.271 "name": "raid_bdev1", 00:14:14.271 "uuid": "3fbda913-eece-435d-a3c4-4a757f267e41", 00:14:14.271 "strip_size_kb": 0, 00:14:14.271 "state": "online", 00:14:14.271 "raid_level": "raid1", 00:14:14.271 "superblock": true, 00:14:14.271 "num_base_bdevs": 2, 00:14:14.271 
"num_base_bdevs_discovered": 2, 00:14:14.271 "num_base_bdevs_operational": 2, 00:14:14.271 "base_bdevs_list": [ 00:14:14.271 { 00:14:14.271 "name": "pt1", 00:14:14.271 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:14.271 "is_configured": true, 00:14:14.271 "data_offset": 256, 00:14:14.271 "data_size": 7936 00:14:14.271 }, 00:14:14.271 { 00:14:14.271 "name": "pt2", 00:14:14.271 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:14.271 "is_configured": true, 00:14:14.271 "data_offset": 256, 00:14:14.271 "data_size": 7936 00:14:14.271 } 00:14:14.271 ] 00:14:14.271 }' 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.271 09:48:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:14.530 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:14.530 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:14.530 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:14.530 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:14.530 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:14:14.530 09:48:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:14.530 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:14.530 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:14.530 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.530 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:14.530 [2024-10-30 09:48:53.009714] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:14:14.530 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.530 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:14.530 "name": "raid_bdev1", 00:14:14.530 "aliases": [ 00:14:14.530 "3fbda913-eece-435d-a3c4-4a757f267e41" 00:14:14.530 ], 00:14:14.530 "product_name": "Raid Volume", 00:14:14.530 "block_size": 4096, 00:14:14.530 "num_blocks": 7936, 00:14:14.530 "uuid": "3fbda913-eece-435d-a3c4-4a757f267e41", 00:14:14.530 "assigned_rate_limits": { 00:14:14.530 "rw_ios_per_sec": 0, 00:14:14.530 "rw_mbytes_per_sec": 0, 00:14:14.530 "r_mbytes_per_sec": 0, 00:14:14.530 "w_mbytes_per_sec": 0 00:14:14.530 }, 00:14:14.530 "claimed": false, 00:14:14.530 "zoned": false, 00:14:14.530 "supported_io_types": { 00:14:14.530 "read": true, 00:14:14.530 "write": true, 00:14:14.530 "unmap": false, 00:14:14.530 "flush": false, 00:14:14.530 "reset": true, 00:14:14.530 "nvme_admin": false, 00:14:14.530 "nvme_io": false, 00:14:14.530 "nvme_io_md": false, 00:14:14.530 "write_zeroes": true, 00:14:14.530 "zcopy": false, 00:14:14.530 "get_zone_info": false, 00:14:14.530 "zone_management": false, 00:14:14.530 "zone_append": false, 00:14:14.530 "compare": false, 00:14:14.530 "compare_and_write": false, 00:14:14.530 "abort": false, 00:14:14.530 "seek_hole": false, 00:14:14.530 "seek_data": false, 00:14:14.530 "copy": false, 00:14:14.530 "nvme_iov_md": false 00:14:14.530 }, 00:14:14.530 "memory_domains": [ 00:14:14.530 { 00:14:14.530 "dma_device_id": "system", 00:14:14.530 "dma_device_type": 1 00:14:14.530 }, 00:14:14.530 { 00:14:14.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.530 "dma_device_type": 2 00:14:14.530 }, 00:14:14.530 { 00:14:14.530 "dma_device_id": "system", 00:14:14.530 "dma_device_type": 1 00:14:14.530 }, 00:14:14.530 { 00:14:14.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.530 "dma_device_type": 2 00:14:14.530 } 00:14:14.530 ], 
00:14:14.530 "driver_specific": { 00:14:14.530 "raid": { 00:14:14.530 "uuid": "3fbda913-eece-435d-a3c4-4a757f267e41", 00:14:14.530 "strip_size_kb": 0, 00:14:14.530 "state": "online", 00:14:14.530 "raid_level": "raid1", 00:14:14.530 "superblock": true, 00:14:14.530 "num_base_bdevs": 2, 00:14:14.530 "num_base_bdevs_discovered": 2, 00:14:14.530 "num_base_bdevs_operational": 2, 00:14:14.530 "base_bdevs_list": [ 00:14:14.530 { 00:14:14.530 "name": "pt1", 00:14:14.530 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:14.530 "is_configured": true, 00:14:14.530 "data_offset": 256, 00:14:14.530 "data_size": 7936 00:14:14.530 }, 00:14:14.530 { 00:14:14.530 "name": "pt2", 00:14:14.530 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:14.530 "is_configured": true, 00:14:14.530 "data_offset": 256, 00:14:14.530 "data_size": 7936 00:14:14.530 } 00:14:14.530 ] 00:14:14.530 } 00:14:14.530 } 00:14:14.530 }' 00:14:14.530 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:14.530 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:14.530 pt2' 00:14:14.530 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.530 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:14:14.531 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:14.531 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.531 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:14.531 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.531 09:48:53 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:14.531 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.531 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:14.531 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:14.531 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:14.531 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.531 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:14.531 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.531 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:14.531 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:14.790 [2024-10-30 09:48:53.157710] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3fbda913-eece-435d-a3c4-4a757f267e41 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 3fbda913-eece-435d-a3c4-4a757f267e41 ']' 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:14.790 [2024-10-30 09:48:53.185481] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:14.790 [2024-10-30 09:48:53.185497] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:14.790 [2024-10-30 09:48:53.185550] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:14.790 [2024-10-30 09:48:53.185597] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:14.790 [2024-10-30 09:48:53.185606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:14.790 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:14.791 [2024-10-30 09:48:53.281519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:14.791 [2024-10-30 09:48:53.283026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:14.791 [2024-10-30 09:48:53.283086] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:14.791 [2024-10-30 09:48:53.283126] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:14.791 [2024-10-30 09:48:53.283137] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:14.791 [2024-10-30 09:48:53.283145] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:14.791 request: 00:14:14.791 { 00:14:14.791 "name": "raid_bdev1", 00:14:14.791 "raid_level": "raid1", 00:14:14.791 "base_bdevs": [ 00:14:14.791 "malloc1", 00:14:14.791 "malloc2" 00:14:14.791 ], 00:14:14.791 "superblock": false, 00:14:14.791 "method": "bdev_raid_create", 00:14:14.791 "req_id": 1 00:14:14.791 } 00:14:14.791 Got JSON-RPC error response 00:14:14.791 response: 00:14:14.791 { 00:14:14.791 "code": -17, 00:14:14.791 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:14.791 } 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:14.791 [2024-10-30 09:48:53.321514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:14.791 [2024-10-30 09:48:53.321549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.791 [2024-10-30 09:48:53.321561] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:14.791 [2024-10-30 09:48:53.321569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.791 [2024-10-30 09:48:53.323273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.791 [2024-10-30 09:48:53.323303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:14.791 [2024-10-30 09:48:53.323356] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:14.791 [2024-10-30 09:48:53.323401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:14.791 pt1 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.791 "name": "raid_bdev1", 00:14:14.791 "uuid": "3fbda913-eece-435d-a3c4-4a757f267e41", 00:14:14.791 "strip_size_kb": 0, 00:14:14.791 "state": "configuring", 00:14:14.791 "raid_level": "raid1", 00:14:14.791 "superblock": true, 00:14:14.791 "num_base_bdevs": 2, 00:14:14.791 "num_base_bdevs_discovered": 1, 00:14:14.791 "num_base_bdevs_operational": 2, 00:14:14.791 "base_bdevs_list": [ 00:14:14.791 { 00:14:14.791 "name": "pt1", 00:14:14.791 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:14.791 "is_configured": true, 00:14:14.791 "data_offset": 256, 00:14:14.791 "data_size": 7936 00:14:14.791 }, 00:14:14.791 { 00:14:14.791 "name": null, 00:14:14.791 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:14.791 "is_configured": false, 00:14:14.791 "data_offset": 256, 00:14:14.791 "data_size": 7936 00:14:14.791 } 
00:14:14.791 ] 00:14:14.791 }' 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.791 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:15.050 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:14:15.050 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:15.050 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:15.050 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:15.050 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.050 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:15.050 [2024-10-30 09:48:53.633601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:15.050 [2024-10-30 09:48:53.633653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.050 [2024-10-30 09:48:53.633667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:15.050 [2024-10-30 09:48:53.633676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.050 [2024-10-30 09:48:53.634030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.050 [2024-10-30 09:48:53.634048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:15.050 [2024-10-30 09:48:53.634119] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:15.050 [2024-10-30 09:48:53.634140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:15.050 [2024-10-30 09:48:53.634233] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:14:15.050 [2024-10-30 09:48:53.634242] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:15.050 [2024-10-30 09:48:53.634431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:15.050 [2024-10-30 09:48:53.634537] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:15.050 [2024-10-30 09:48:53.634544] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:15.050 [2024-10-30 09:48:53.634649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.050 pt2 00:14:15.050 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.050 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:15.050 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:15.050 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:15.050 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.050 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.050 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.050 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.050 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:15.050 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.050 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.050 09:48:53 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.050 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.050 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.050 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.050 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.050 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:15.050 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.309 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.309 "name": "raid_bdev1", 00:14:15.309 "uuid": "3fbda913-eece-435d-a3c4-4a757f267e41", 00:14:15.309 "strip_size_kb": 0, 00:14:15.309 "state": "online", 00:14:15.309 "raid_level": "raid1", 00:14:15.309 "superblock": true, 00:14:15.309 "num_base_bdevs": 2, 00:14:15.309 "num_base_bdevs_discovered": 2, 00:14:15.309 "num_base_bdevs_operational": 2, 00:14:15.309 "base_bdevs_list": [ 00:14:15.309 { 00:14:15.309 "name": "pt1", 00:14:15.309 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:15.309 "is_configured": true, 00:14:15.309 "data_offset": 256, 00:14:15.309 "data_size": 7936 00:14:15.309 }, 00:14:15.309 { 00:14:15.309 "name": "pt2", 00:14:15.309 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:15.309 "is_configured": true, 00:14:15.309 "data_offset": 256, 00:14:15.309 "data_size": 7936 00:14:15.309 } 00:14:15.309 ] 00:14:15.309 }' 00:14:15.309 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.309 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:15.567 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:14:15.567 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:15.567 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:15.567 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:15.567 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:14:15.567 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:15.567 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:15.567 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:15.567 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.567 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:15.567 [2024-10-30 09:48:53.961865] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:15.567 09:48:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.567 09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:15.567 "name": "raid_bdev1", 00:14:15.567 "aliases": [ 00:14:15.567 "3fbda913-eece-435d-a3c4-4a757f267e41" 00:14:15.567 ], 00:14:15.567 "product_name": "Raid Volume", 00:14:15.567 "block_size": 4096, 00:14:15.567 "num_blocks": 7936, 00:14:15.567 "uuid": "3fbda913-eece-435d-a3c4-4a757f267e41", 00:14:15.567 "assigned_rate_limits": { 00:14:15.567 "rw_ios_per_sec": 0, 00:14:15.567 "rw_mbytes_per_sec": 0, 00:14:15.567 "r_mbytes_per_sec": 0, 00:14:15.567 "w_mbytes_per_sec": 0 00:14:15.567 }, 00:14:15.567 "claimed": false, 00:14:15.567 "zoned": false, 00:14:15.567 "supported_io_types": { 00:14:15.567 "read": true, 00:14:15.567 "write": true, 00:14:15.567 "unmap": false, 
00:14:15.567 "flush": false, 00:14:15.567 "reset": true, 00:14:15.567 "nvme_admin": false, 00:14:15.567 "nvme_io": false, 00:14:15.567 "nvme_io_md": false, 00:14:15.567 "write_zeroes": true, 00:14:15.567 "zcopy": false, 00:14:15.567 "get_zone_info": false, 00:14:15.567 "zone_management": false, 00:14:15.567 "zone_append": false, 00:14:15.567 "compare": false, 00:14:15.567 "compare_and_write": false, 00:14:15.567 "abort": false, 00:14:15.567 "seek_hole": false, 00:14:15.567 "seek_data": false, 00:14:15.567 "copy": false, 00:14:15.567 "nvme_iov_md": false 00:14:15.567 }, 00:14:15.567 "memory_domains": [ 00:14:15.567 { 00:14:15.567 "dma_device_id": "system", 00:14:15.567 "dma_device_type": 1 00:14:15.567 }, 00:14:15.567 { 00:14:15.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.567 "dma_device_type": 2 00:14:15.567 }, 00:14:15.567 { 00:14:15.567 "dma_device_id": "system", 00:14:15.567 "dma_device_type": 1 00:14:15.567 }, 00:14:15.567 { 00:14:15.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.568 "dma_device_type": 2 00:14:15.568 } 00:14:15.568 ], 00:14:15.568 "driver_specific": { 00:14:15.568 "raid": { 00:14:15.568 "uuid": "3fbda913-eece-435d-a3c4-4a757f267e41", 00:14:15.568 "strip_size_kb": 0, 00:14:15.568 "state": "online", 00:14:15.568 "raid_level": "raid1", 00:14:15.568 "superblock": true, 00:14:15.568 "num_base_bdevs": 2, 00:14:15.568 "num_base_bdevs_discovered": 2, 00:14:15.568 "num_base_bdevs_operational": 2, 00:14:15.568 "base_bdevs_list": [ 00:14:15.568 { 00:14:15.568 "name": "pt1", 00:14:15.568 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:15.568 "is_configured": true, 00:14:15.568 "data_offset": 256, 00:14:15.568 "data_size": 7936 00:14:15.568 }, 00:14:15.568 { 00:14:15.568 "name": "pt2", 00:14:15.568 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:15.568 "is_configured": true, 00:14:15.568 "data_offset": 256, 00:14:15.568 "data_size": 7936 00:14:15.568 } 00:14:15.568 ] 00:14:15.568 } 00:14:15.568 } 00:14:15.568 }' 00:14:15.568 
09:48:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:15.568 pt2' 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:15.568 [2024-10-30 09:48:54.117862] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 3fbda913-eece-435d-a3c4-4a757f267e41 '!=' 3fbda913-eece-435d-a3c4-4a757f267e41 ']' 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:15.568 [2024-10-30 09:48:54.145685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.568 "name": "raid_bdev1", 00:14:15.568 "uuid": 
"3fbda913-eece-435d-a3c4-4a757f267e41", 00:14:15.568 "strip_size_kb": 0, 00:14:15.568 "state": "online", 00:14:15.568 "raid_level": "raid1", 00:14:15.568 "superblock": true, 00:14:15.568 "num_base_bdevs": 2, 00:14:15.568 "num_base_bdevs_discovered": 1, 00:14:15.568 "num_base_bdevs_operational": 1, 00:14:15.568 "base_bdevs_list": [ 00:14:15.568 { 00:14:15.568 "name": null, 00:14:15.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.568 "is_configured": false, 00:14:15.568 "data_offset": 0, 00:14:15.568 "data_size": 7936 00:14:15.568 }, 00:14:15.568 { 00:14:15.568 "name": "pt2", 00:14:15.568 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:15.568 "is_configured": true, 00:14:15.568 "data_offset": 256, 00:14:15.568 "data_size": 7936 00:14:15.568 } 00:14:15.568 ] 00:14:15.568 }' 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.568 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:16.135 [2024-10-30 09:48:54.473735] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:16.135 [2024-10-30 09:48:54.473755] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:16.135 [2024-10-30 09:48:54.473806] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:16.135 [2024-10-30 09:48:54.473841] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:16.135 [2024-10-30 09:48:54.473849] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:16.135 [2024-10-30 09:48:54.517740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:16.135 [2024-10-30 09:48:54.517782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.135 [2024-10-30 09:48:54.517794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:16.135 [2024-10-30 09:48:54.517803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.135 [2024-10-30 09:48:54.519569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.135 [2024-10-30 09:48:54.519600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:16.135 [2024-10-30 09:48:54.519658] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:16.135 [2024-10-30 09:48:54.519694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:16.135 [2024-10-30 09:48:54.519768] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:16.135 [2024-10-30 09:48:54.519778] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:16.135 [2024-10-30 09:48:54.519962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:16.135 [2024-10-30 09:48:54.520079] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:16.135 [2024-10-30 09:48:54.520086] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:14:16.135 [2024-10-30 09:48:54.520190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.135 pt2 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.135 09:48:54 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.135 "name": "raid_bdev1", 00:14:16.135 "uuid": "3fbda913-eece-435d-a3c4-4a757f267e41", 00:14:16.135 "strip_size_kb": 0, 00:14:16.135 "state": "online", 00:14:16.135 "raid_level": "raid1", 00:14:16.135 "superblock": true, 00:14:16.135 "num_base_bdevs": 2, 00:14:16.135 "num_base_bdevs_discovered": 1, 00:14:16.135 "num_base_bdevs_operational": 1, 00:14:16.135 "base_bdevs_list": [ 00:14:16.135 { 00:14:16.135 "name": null, 00:14:16.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.135 "is_configured": false, 00:14:16.135 "data_offset": 256, 00:14:16.135 "data_size": 7936 00:14:16.135 }, 00:14:16.135 { 00:14:16.135 "name": "pt2", 00:14:16.135 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:16.135 "is_configured": true, 00:14:16.135 "data_offset": 256, 00:14:16.135 "data_size": 7936 00:14:16.135 } 00:14:16.135 ] 00:14:16.135 }' 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.135 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:16.394 [2024-10-30 09:48:54.841790] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:16.394 [2024-10-30 09:48:54.841812] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:16.394 [2024-10-30 09:48:54.841866] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:16.394 [2024-10-30 09:48:54.841902] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:14:16.394 [2024-10-30 09:48:54.841910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:16.394 [2024-10-30 09:48:54.885805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:16.394 [2024-10-30 09:48:54.885847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.394 [2024-10-30 09:48:54.885862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:14:16.394 [2024-10-30 09:48:54.885868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.394 [2024-10-30 09:48:54.887655] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.394 [2024-10-30 09:48:54.887684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:16.394 [2024-10-30 09:48:54.887745] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:16.394 [2024-10-30 09:48:54.887780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:16.394 [2024-10-30 09:48:54.887876] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:16.394 [2024-10-30 09:48:54.887884] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:16.394 [2024-10-30 09:48:54.887896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:16.394 [2024-10-30 09:48:54.887936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:16.394 [2024-10-30 09:48:54.887990] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:16.394 [2024-10-30 09:48:54.887997] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:16.394 [2024-10-30 09:48:54.888206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:16.394 [2024-10-30 09:48:54.888311] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:16.394 [2024-10-30 09:48:54.888319] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:16.394 [2024-10-30 09:48:54.888423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.394 pt1 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.394 "name": "raid_bdev1", 00:14:16.394 "uuid": "3fbda913-eece-435d-a3c4-4a757f267e41", 00:14:16.394 "strip_size_kb": 0, 00:14:16.394 "state": "online", 00:14:16.394 
"raid_level": "raid1", 00:14:16.394 "superblock": true, 00:14:16.394 "num_base_bdevs": 2, 00:14:16.394 "num_base_bdevs_discovered": 1, 00:14:16.394 "num_base_bdevs_operational": 1, 00:14:16.394 "base_bdevs_list": [ 00:14:16.394 { 00:14:16.394 "name": null, 00:14:16.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.394 "is_configured": false, 00:14:16.394 "data_offset": 256, 00:14:16.394 "data_size": 7936 00:14:16.394 }, 00:14:16.394 { 00:14:16.394 "name": "pt2", 00:14:16.394 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:16.394 "is_configured": true, 00:14:16.394 "data_offset": 256, 00:14:16.394 "data_size": 7936 00:14:16.394 } 00:14:16.394 ] 00:14:16.394 }' 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.394 09:48:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:16.652 09:48:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:16.652 09:48:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:16.652 09:48:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.652 09:48:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:16.652 09:48:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.652 09:48:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:16.652 09:48:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:16.652 09:48:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:16.652 09:48:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.652 09:48:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:14:16.652 [2024-10-30 09:48:55.230039] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:16.652 09:48:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.652 09:48:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 3fbda913-eece-435d-a3c4-4a757f267e41 '!=' 3fbda913-eece-435d-a3c4-4a757f267e41 ']' 00:14:16.652 09:48:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 83751 00:14:16.652 09:48:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # '[' -z 83751 ']' 00:14:16.652 09:48:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # kill -0 83751 00:14:16.652 09:48:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # uname 00:14:16.652 09:48:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:16.652 09:48:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83751 00:14:16.911 killing process with pid 83751 00:14:16.911 09:48:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:16.911 09:48:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:16.911 09:48:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83751' 00:14:16.911 09:48:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@971 -- # kill 83751 00:14:16.911 [2024-10-30 09:48:55.275358] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:16.911 [2024-10-30 09:48:55.275419] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:16.911 09:48:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@976 -- # wait 83751 00:14:16.911 [2024-10-30 09:48:55.275454] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:16.911 [2024-10-30 09:48:55.275466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:16.911 [2024-10-30 09:48:55.375567] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:17.478 09:48:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:14:17.478 00:14:17.478 real 0m4.215s 00:14:17.478 user 0m6.546s 00:14:17.478 sys 0m0.652s 00:14:17.478 09:48:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:17.478 ************************************ 00:14:17.478 END TEST raid_superblock_test_4k 00:14:17.478 ************************************ 00:14:17.478 09:48:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:17.478 09:48:55 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:14:17.478 09:48:55 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:14:17.478 09:48:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:17.478 09:48:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:17.478 09:48:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:17.478 ************************************ 00:14:17.478 START TEST raid_rebuild_test_sb_4k 00:14:17.478 ************************************ 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:17.478 
09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # 
strip_size=0 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=84053 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 84053 00:14:17.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 84053 ']' 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:17.478 09:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:17.478 [2024-10-30 09:48:56.038021] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:14:17.478 [2024-10-30 09:48:56.038151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84053 ] 00:14:17.478 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:14:17.478 Zero copy mechanism will not be used. 00:14:17.737 [2024-10-30 09:48:56.196581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.737 [2024-10-30 09:48:56.294707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.995 [2024-10-30 09:48:56.429898] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:17.995 [2024-10-30 09:48:56.430115] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:18.253 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:18.253 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:14:18.253 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:18.253 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:14:18.253 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.253 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:18.512 BaseBdev1_malloc 00:14:18.512 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.512 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:18.512 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.512 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:18.512 [2024-10-30 09:48:56.902198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:18.512 [2024-10-30 09:48:56.902260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.512 [2024-10-30 09:48:56.902280] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007280 00:14:18.512 [2024-10-30 09:48:56.902292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.512 [2024-10-30 09:48:56.904409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.512 [2024-10-30 09:48:56.904447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:18.512 BaseBdev1 00:14:18.512 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.512 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:18.512 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:14:18.512 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.512 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:18.512 BaseBdev2_malloc 00:14:18.512 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.512 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:18.512 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.512 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:18.512 [2024-10-30 09:48:56.937982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:18.512 [2024-10-30 09:48:56.938163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.512 [2024-10-30 09:48:56.938186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:18.512 [2024-10-30 09:48:56.938197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:18.512 [2024-10-30 09:48:56.940261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.512 [2024-10-30 09:48:56.940292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:18.512 BaseBdev2 00:14:18.512 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.512 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:14:18.512 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.512 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:18.512 spare_malloc 00:14:18.512 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.512 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:18.512 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.512 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:18.512 spare_delay 00:14:18.512 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.512 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:18.512 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.512 09:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:18.512 [2024-10-30 09:48:56.998397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:18.512 [2024-10-30 09:48:56.998454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.512 [2024-10-30 09:48:56.998471] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:18.512 [2024-10-30 09:48:56.998481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.512 [2024-10-30 09:48:57.000586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.512 [2024-10-30 09:48:57.000738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:18.512 spare 00:14:18.512 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.512 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:18.512 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.512 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:18.512 [2024-10-30 09:48:57.006445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:18.512 [2024-10-30 09:48:57.008263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:18.512 [2024-10-30 09:48:57.008426] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:18.512 [2024-10-30 09:48:57.008442] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:18.512 [2024-10-30 09:48:57.008686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:18.512 [2024-10-30 09:48:57.008831] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:18.512 [2024-10-30 09:48:57.008840] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:18.512 [2024-10-30 09:48:57.009020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.512 
09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.512 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:18.512 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.512 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.512 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.512 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.512 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:18.512 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.512 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.512 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.512 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.512 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.512 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.512 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.512 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:18.512 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.512 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.513 "name": "raid_bdev1", 00:14:18.513 "uuid": "6b83ac14-8818-49e0-ac19-c548ffb08fa9", 
00:14:18.513 "strip_size_kb": 0, 00:14:18.513 "state": "online", 00:14:18.513 "raid_level": "raid1", 00:14:18.513 "superblock": true, 00:14:18.513 "num_base_bdevs": 2, 00:14:18.513 "num_base_bdevs_discovered": 2, 00:14:18.513 "num_base_bdevs_operational": 2, 00:14:18.513 "base_bdevs_list": [ 00:14:18.513 { 00:14:18.513 "name": "BaseBdev1", 00:14:18.513 "uuid": "1098c220-be21-5c82-bb84-f5cb0a23de86", 00:14:18.513 "is_configured": true, 00:14:18.513 "data_offset": 256, 00:14:18.513 "data_size": 7936 00:14:18.513 }, 00:14:18.513 { 00:14:18.513 "name": "BaseBdev2", 00:14:18.513 "uuid": "7f1a2d37-008d-5703-b3d8-06c6965f39c7", 00:14:18.513 "is_configured": true, 00:14:18.513 "data_offset": 256, 00:14:18.513 "data_size": 7936 00:14:18.513 } 00:14:18.513 ] 00:14:18.513 }' 00:14:18.513 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.513 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:18.771 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:18.771 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:18.771 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.771 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:18.771 [2024-10-30 09:48:57.310805] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:18.771 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.771 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:14:18.771 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:18.771 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:18.771 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.771 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:18.771 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.771 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:14:18.771 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:18.771 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:18.771 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:18.771 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:18.771 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:18.771 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:18.771 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:18.771 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:18.771 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:18.771 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:14:18.772 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:18.772 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:18.772 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:19.030 [2024-10-30 09:48:57.550602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005fb0 00:14:19.030 /dev/nbd0 00:14:19.030 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:19.030 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:19.030 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:19.030 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:14:19.030 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:19.030 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:19.030 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:19.030 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:14:19.030 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:19.030 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:19.030 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:19.030 1+0 records in 00:14:19.030 1+0 records out 00:14:19.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224129 s, 18.3 MB/s 00:14:19.030 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.030 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:14:19.030 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.030 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:19.030 09:48:57 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:14:19.030 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:19.030 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:19.030 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:19.030 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:19.030 09:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:14:19.963 7936+0 records in 00:14:19.963 7936+0 records out 00:14:19.963 32505856 bytes (33 MB, 31 MiB) copied, 0.72823 s, 44.6 MB/s 00:14:19.963 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:19.963 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:19.963 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:19.963 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:19.963 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:14:19.963 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:19.963 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:19.963 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:19.964 [2024-10-30 09:48:58.537154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:19.964 [2024-10-30 09:48:58.545600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.964 "name": "raid_bdev1", 00:14:19.964 "uuid": "6b83ac14-8818-49e0-ac19-c548ffb08fa9", 00:14:19.964 "strip_size_kb": 0, 00:14:19.964 "state": "online", 00:14:19.964 "raid_level": "raid1", 00:14:19.964 "superblock": true, 00:14:19.964 "num_base_bdevs": 2, 00:14:19.964 "num_base_bdevs_discovered": 1, 00:14:19.964 "num_base_bdevs_operational": 1, 00:14:19.964 "base_bdevs_list": [ 00:14:19.964 { 00:14:19.964 "name": null, 00:14:19.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.964 "is_configured": false, 00:14:19.964 "data_offset": 0, 00:14:19.964 "data_size": 7936 00:14:19.964 }, 00:14:19.964 { 00:14:19.964 "name": "BaseBdev2", 00:14:19.964 "uuid": "7f1a2d37-008d-5703-b3d8-06c6965f39c7", 00:14:19.964 "is_configured": true, 00:14:19.964 "data_offset": 256, 00:14:19.964 "data_size": 7936 00:14:19.964 } 00:14:19.964 ] 00:14:19.964 }' 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.964 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:14:20.530 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:20.530 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.530 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:20.530 [2024-10-30 09:48:58.861699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:20.530 [2024-10-30 09:48:58.873308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:14:20.530 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.530 09:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:20.530 [2024-10-30 09:48:58.875287] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:21.463 09:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.463 09:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.463 09:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.463 09:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.463 09:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.463 09:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.463 09:48:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.463 09:48:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:21.463 09:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.463 09:48:59 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.463 09:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.463 "name": "raid_bdev1", 00:14:21.463 "uuid": "6b83ac14-8818-49e0-ac19-c548ffb08fa9", 00:14:21.463 "strip_size_kb": 0, 00:14:21.463 "state": "online", 00:14:21.463 "raid_level": "raid1", 00:14:21.463 "superblock": true, 00:14:21.463 "num_base_bdevs": 2, 00:14:21.463 "num_base_bdevs_discovered": 2, 00:14:21.463 "num_base_bdevs_operational": 2, 00:14:21.464 "process": { 00:14:21.464 "type": "rebuild", 00:14:21.464 "target": "spare", 00:14:21.464 "progress": { 00:14:21.464 "blocks": 2560, 00:14:21.464 "percent": 32 00:14:21.464 } 00:14:21.464 }, 00:14:21.464 "base_bdevs_list": [ 00:14:21.464 { 00:14:21.464 "name": "spare", 00:14:21.464 "uuid": "c5962122-47dc-5751-88c4-c3f06e75c33c", 00:14:21.464 "is_configured": true, 00:14:21.464 "data_offset": 256, 00:14:21.464 "data_size": 7936 00:14:21.464 }, 00:14:21.464 { 00:14:21.464 "name": "BaseBdev2", 00:14:21.464 "uuid": "7f1a2d37-008d-5703-b3d8-06c6965f39c7", 00:14:21.464 "is_configured": true, 00:14:21.464 "data_offset": 256, 00:14:21.464 "data_size": 7936 00:14:21.464 } 00:14:21.464 ] 00:14:21.464 }' 00:14:21.464 09:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.464 09:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.464 09:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.464 09:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.464 09:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:21.464 09:48:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.464 09:48:59 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:21.464 [2024-10-30 09:48:59.976924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:21.464 [2024-10-30 09:48:59.980425] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:21.464 [2024-10-30 09:48:59.980577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.464 [2024-10-30 09:48:59.980593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:21.464 [2024-10-30 09:48:59.980602] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:21.464 09:48:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.464 09:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:21.464 09:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.464 09:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.464 09:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.464 09:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.464 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:21.464 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.464 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.464 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.464 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.464 09:49:00 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.464 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.464 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.464 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:21.464 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.464 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.464 "name": "raid_bdev1", 00:14:21.464 "uuid": "6b83ac14-8818-49e0-ac19-c548ffb08fa9", 00:14:21.464 "strip_size_kb": 0, 00:14:21.464 "state": "online", 00:14:21.464 "raid_level": "raid1", 00:14:21.464 "superblock": true, 00:14:21.464 "num_base_bdevs": 2, 00:14:21.464 "num_base_bdevs_discovered": 1, 00:14:21.464 "num_base_bdevs_operational": 1, 00:14:21.464 "base_bdevs_list": [ 00:14:21.464 { 00:14:21.464 "name": null, 00:14:21.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.464 "is_configured": false, 00:14:21.464 "data_offset": 0, 00:14:21.464 "data_size": 7936 00:14:21.464 }, 00:14:21.464 { 00:14:21.464 "name": "BaseBdev2", 00:14:21.464 "uuid": "7f1a2d37-008d-5703-b3d8-06c6965f39c7", 00:14:21.464 "is_configured": true, 00:14:21.464 "data_offset": 256, 00:14:21.464 "data_size": 7936 00:14:21.464 } 00:14:21.464 ] 00:14:21.464 }' 00:14:21.464 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.464 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:21.721 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:21.721 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.721 09:49:00 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:21.721 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:21.721 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.721 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.721 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.721 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:21.721 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.721 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.721 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.721 "name": "raid_bdev1", 00:14:21.721 "uuid": "6b83ac14-8818-49e0-ac19-c548ffb08fa9", 00:14:21.721 "strip_size_kb": 0, 00:14:21.721 "state": "online", 00:14:21.721 "raid_level": "raid1", 00:14:21.721 "superblock": true, 00:14:21.721 "num_base_bdevs": 2, 00:14:21.721 "num_base_bdevs_discovered": 1, 00:14:21.721 "num_base_bdevs_operational": 1, 00:14:21.721 "base_bdevs_list": [ 00:14:21.721 { 00:14:21.721 "name": null, 00:14:21.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.721 "is_configured": false, 00:14:21.721 "data_offset": 0, 00:14:21.721 "data_size": 7936 00:14:21.721 }, 00:14:21.721 { 00:14:21.721 "name": "BaseBdev2", 00:14:21.721 "uuid": "7f1a2d37-008d-5703-b3d8-06c6965f39c7", 00:14:21.721 "is_configured": true, 00:14:21.721 "data_offset": 256, 00:14:21.721 "data_size": 7936 00:14:21.721 } 00:14:21.721 ] 00:14:21.721 }' 00:14:21.721 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.979 09:49:00 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:21.979 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.979 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:21.979 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:21.979 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.979 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:21.979 [2024-10-30 09:49:00.395647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:21.979 [2024-10-30 09:49:00.404746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:14:21.979 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.979 09:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:21.979 [2024-10-30 09:49:00.406302] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:22.915 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.915 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.915 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.915 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.915 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.915 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.915 09:49:01 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.915 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:22.915 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.915 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.915 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.915 "name": "raid_bdev1", 00:14:22.915 "uuid": "6b83ac14-8818-49e0-ac19-c548ffb08fa9", 00:14:22.915 "strip_size_kb": 0, 00:14:22.915 "state": "online", 00:14:22.916 "raid_level": "raid1", 00:14:22.916 "superblock": true, 00:14:22.916 "num_base_bdevs": 2, 00:14:22.916 "num_base_bdevs_discovered": 2, 00:14:22.916 "num_base_bdevs_operational": 2, 00:14:22.916 "process": { 00:14:22.916 "type": "rebuild", 00:14:22.916 "target": "spare", 00:14:22.916 "progress": { 00:14:22.916 "blocks": 2560, 00:14:22.916 "percent": 32 00:14:22.916 } 00:14:22.916 }, 00:14:22.916 "base_bdevs_list": [ 00:14:22.916 { 00:14:22.916 "name": "spare", 00:14:22.916 "uuid": "c5962122-47dc-5751-88c4-c3f06e75c33c", 00:14:22.916 "is_configured": true, 00:14:22.916 "data_offset": 256, 00:14:22.916 "data_size": 7936 00:14:22.916 }, 00:14:22.916 { 00:14:22.916 "name": "BaseBdev2", 00:14:22.916 "uuid": "7f1a2d37-008d-5703-b3d8-06c6965f39c7", 00:14:22.916 "is_configured": true, 00:14:22.916 "data_offset": 256, 00:14:22.916 "data_size": 7936 00:14:22.916 } 00:14:22.916 ] 00:14:22.916 }' 00:14:22.916 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.916 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.916 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.916 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.916 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:22.916 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:22.916 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:22.916 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:22.916 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:22.916 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:22.916 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=536 00:14:22.916 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:22.916 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.916 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.916 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.916 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.916 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.916 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.916 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.916 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.916 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:22.916 09:49:01 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.175 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.175 "name": "raid_bdev1", 00:14:23.175 "uuid": "6b83ac14-8818-49e0-ac19-c548ffb08fa9", 00:14:23.175 "strip_size_kb": 0, 00:14:23.175 "state": "online", 00:14:23.175 "raid_level": "raid1", 00:14:23.175 "superblock": true, 00:14:23.175 "num_base_bdevs": 2, 00:14:23.175 "num_base_bdevs_discovered": 2, 00:14:23.175 "num_base_bdevs_operational": 2, 00:14:23.175 "process": { 00:14:23.175 "type": "rebuild", 00:14:23.175 "target": "spare", 00:14:23.175 "progress": { 00:14:23.175 "blocks": 2816, 00:14:23.175 "percent": 35 00:14:23.175 } 00:14:23.175 }, 00:14:23.175 "base_bdevs_list": [ 00:14:23.175 { 00:14:23.175 "name": "spare", 00:14:23.175 "uuid": "c5962122-47dc-5751-88c4-c3f06e75c33c", 00:14:23.175 "is_configured": true, 00:14:23.175 "data_offset": 256, 00:14:23.175 "data_size": 7936 00:14:23.175 }, 00:14:23.175 { 00:14:23.175 "name": "BaseBdev2", 00:14:23.175 "uuid": "7f1a2d37-008d-5703-b3d8-06c6965f39c7", 00:14:23.175 "is_configured": true, 00:14:23.175 "data_offset": 256, 00:14:23.175 "data_size": 7936 00:14:23.175 } 00:14:23.175 ] 00:14:23.175 }' 00:14:23.175 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.175 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.175 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.175 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:23.175 09:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:24.109 09:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:24.109 09:49:02 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.109 09:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.109 09:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.109 09:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.109 09:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.109 09:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.109 09:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.109 09:49:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.109 09:49:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:24.109 09:49:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.109 09:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.109 "name": "raid_bdev1", 00:14:24.109 "uuid": "6b83ac14-8818-49e0-ac19-c548ffb08fa9", 00:14:24.109 "strip_size_kb": 0, 00:14:24.109 "state": "online", 00:14:24.109 "raid_level": "raid1", 00:14:24.109 "superblock": true, 00:14:24.109 "num_base_bdevs": 2, 00:14:24.109 "num_base_bdevs_discovered": 2, 00:14:24.109 "num_base_bdevs_operational": 2, 00:14:24.109 "process": { 00:14:24.109 "type": "rebuild", 00:14:24.109 "target": "spare", 00:14:24.109 "progress": { 00:14:24.109 "blocks": 5632, 00:14:24.109 "percent": 70 00:14:24.109 } 00:14:24.109 }, 00:14:24.109 "base_bdevs_list": [ 00:14:24.109 { 00:14:24.109 "name": "spare", 00:14:24.109 "uuid": "c5962122-47dc-5751-88c4-c3f06e75c33c", 00:14:24.109 "is_configured": true, 00:14:24.109 "data_offset": 256, 00:14:24.109 "data_size": 7936 00:14:24.109 
}, 00:14:24.109 { 00:14:24.109 "name": "BaseBdev2", 00:14:24.109 "uuid": "7f1a2d37-008d-5703-b3d8-06c6965f39c7", 00:14:24.109 "is_configured": true, 00:14:24.109 "data_offset": 256, 00:14:24.109 "data_size": 7936 00:14:24.109 } 00:14:24.109 ] 00:14:24.109 }' 00:14:24.109 09:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.109 09:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:24.109 09:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.109 09:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.109 09:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:25.042 [2024-10-30 09:49:03.519666] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:25.042 [2024-10-30 09:49:03.519730] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:25.042 [2024-10-30 09:49:03.519826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.300 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:25.300 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.300 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.300 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.301 "name": "raid_bdev1", 00:14:25.301 "uuid": "6b83ac14-8818-49e0-ac19-c548ffb08fa9", 00:14:25.301 "strip_size_kb": 0, 00:14:25.301 "state": "online", 00:14:25.301 "raid_level": "raid1", 00:14:25.301 "superblock": true, 00:14:25.301 "num_base_bdevs": 2, 00:14:25.301 "num_base_bdevs_discovered": 2, 00:14:25.301 "num_base_bdevs_operational": 2, 00:14:25.301 "base_bdevs_list": [ 00:14:25.301 { 00:14:25.301 "name": "spare", 00:14:25.301 "uuid": "c5962122-47dc-5751-88c4-c3f06e75c33c", 00:14:25.301 "is_configured": true, 00:14:25.301 "data_offset": 256, 00:14:25.301 "data_size": 7936 00:14:25.301 }, 00:14:25.301 { 00:14:25.301 "name": "BaseBdev2", 00:14:25.301 "uuid": "7f1a2d37-008d-5703-b3d8-06c6965f39c7", 00:14:25.301 "is_configured": true, 00:14:25.301 "data_offset": 256, 00:14:25.301 "data_size": 7936 00:14:25.301 } 00:14:25.301 ] 00:14:25.301 }' 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.301 "name": "raid_bdev1", 00:14:25.301 "uuid": "6b83ac14-8818-49e0-ac19-c548ffb08fa9", 00:14:25.301 "strip_size_kb": 0, 00:14:25.301 "state": "online", 00:14:25.301 "raid_level": "raid1", 00:14:25.301 "superblock": true, 00:14:25.301 "num_base_bdevs": 2, 00:14:25.301 "num_base_bdevs_discovered": 2, 00:14:25.301 "num_base_bdevs_operational": 2, 00:14:25.301 "base_bdevs_list": [ 00:14:25.301 { 00:14:25.301 "name": "spare", 00:14:25.301 "uuid": "c5962122-47dc-5751-88c4-c3f06e75c33c", 00:14:25.301 "is_configured": true, 00:14:25.301 "data_offset": 256, 00:14:25.301 "data_size": 7936 00:14:25.301 }, 00:14:25.301 { 00:14:25.301 "name": "BaseBdev2", 00:14:25.301 "uuid": "7f1a2d37-008d-5703-b3d8-06c6965f39c7", 00:14:25.301 "is_configured": true, 
00:14:25.301 "data_offset": 256, 00:14:25.301 "data_size": 7936 00:14:25.301 } 00:14:25.301 ] 00:14:25.301 }' 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.301 09:49:03 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.301 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:25.560 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.560 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.560 "name": "raid_bdev1", 00:14:25.561 "uuid": "6b83ac14-8818-49e0-ac19-c548ffb08fa9", 00:14:25.561 "strip_size_kb": 0, 00:14:25.561 "state": "online", 00:14:25.561 "raid_level": "raid1", 00:14:25.561 "superblock": true, 00:14:25.561 "num_base_bdevs": 2, 00:14:25.561 "num_base_bdevs_discovered": 2, 00:14:25.561 "num_base_bdevs_operational": 2, 00:14:25.561 "base_bdevs_list": [ 00:14:25.561 { 00:14:25.561 "name": "spare", 00:14:25.561 "uuid": "c5962122-47dc-5751-88c4-c3f06e75c33c", 00:14:25.561 "is_configured": true, 00:14:25.561 "data_offset": 256, 00:14:25.561 "data_size": 7936 00:14:25.561 }, 00:14:25.561 { 00:14:25.561 "name": "BaseBdev2", 00:14:25.561 "uuid": "7f1a2d37-008d-5703-b3d8-06c6965f39c7", 00:14:25.561 "is_configured": true, 00:14:25.561 "data_offset": 256, 00:14:25.561 "data_size": 7936 00:14:25.561 } 00:14:25.561 ] 00:14:25.561 }' 00:14:25.561 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.561 09:49:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:25.818 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:25.818 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.818 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:25.818 [2024-10-30 09:49:04.234658] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:25.818 [2024-10-30 09:49:04.234682] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:14:25.818 [2024-10-30 09:49:04.234738] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.818 [2024-10-30 09:49:04.234791] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.818 [2024-10-30 09:49:04.234801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:25.818 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.818 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.818 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.818 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:25.818 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:14:25.818 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.818 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:25.818 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:25.818 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:25.818 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:25.818 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:25.818 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:25.818 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:25.818 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:25.818 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:25.818 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:14:25.818 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:25.818 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:25.818 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:26.076 /dev/nbd0 00:14:26.076 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:26.076 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:26.076 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:26.076 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:14:26.076 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:26.076 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:26.076 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:26.076 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:14:26.076 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:26.076 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:26.076 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:26.076 1+0 records in 00:14:26.076 1+0 records out 00:14:26.076 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389074 s, 10.5 MB/s 00:14:26.076 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.076 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:14:26.076 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.076 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:26.076 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:14:26.076 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:26.076 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:26.076 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:26.333 /dev/nbd1 00:14:26.333 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:26.333 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:26.333 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:26.333 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:14:26.333 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:26.333 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:26.333 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:26.333 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:14:26.333 09:49:04 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:26.333 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:26.333 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:26.333 1+0 records in 00:14:26.333 1+0 records out 00:14:26.333 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197907 s, 20.7 MB/s 00:14:26.333 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.333 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:14:26.333 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.333 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:26.333 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:14:26.333 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:26.333 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:26.333 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:26.333 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:26.333 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:26.333 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:26.333 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:26.333 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@51 -- # local i 00:14:26.333 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.333 09:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:26.591 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:26.591 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:26.591 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:26.591 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.591 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.591 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:26.591 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:14:26.591 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.591 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.591 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.848 09:49:05 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:26.848 [2024-10-30 09:49:05.315672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:26.848 [2024-10-30 09:49:05.315727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.848 [2024-10-30 09:49:05.315746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:26.848 [2024-10-30 09:49:05.315754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.848 [2024-10-30 09:49:05.317640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.848 [2024-10-30 09:49:05.317674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:26.848 [2024-10-30 09:49:05.317755] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:14:26.848 [2024-10-30 09:49:05.317795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.848 [2024-10-30 09:49:05.317915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:26.848 spare 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:26.848 [2024-10-30 09:49:05.417998] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:26.848 [2024-10-30 09:49:05.418036] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:26.848 [2024-10-30 09:49:05.418297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:14:26.848 [2024-10-30 09:49:05.418447] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:26.848 [2024-10-30 09:49:05.418459] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:26.848 [2024-10-30 09:49:05.418596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.848 
09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.848 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.848 "name": "raid_bdev1", 00:14:26.848 "uuid": "6b83ac14-8818-49e0-ac19-c548ffb08fa9", 00:14:26.848 "strip_size_kb": 0, 00:14:26.848 "state": "online", 00:14:26.848 "raid_level": "raid1", 00:14:26.848 "superblock": true, 00:14:26.848 "num_base_bdevs": 2, 00:14:26.848 "num_base_bdevs_discovered": 2, 00:14:26.848 "num_base_bdevs_operational": 2, 00:14:26.848 "base_bdevs_list": [ 00:14:26.848 { 00:14:26.848 "name": "spare", 00:14:26.848 "uuid": "c5962122-47dc-5751-88c4-c3f06e75c33c", 00:14:26.848 "is_configured": true, 00:14:26.848 "data_offset": 256, 00:14:26.848 
"data_size": 7936 00:14:26.848 }, 00:14:26.848 { 00:14:26.848 "name": "BaseBdev2", 00:14:26.849 "uuid": "7f1a2d37-008d-5703-b3d8-06c6965f39c7", 00:14:26.849 "is_configured": true, 00:14:26.849 "data_offset": 256, 00:14:26.849 "data_size": 7936 00:14:26.849 } 00:14:26.849 ] 00:14:26.849 }' 00:14:26.849 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.849 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:27.413 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:27.413 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.413 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:27.413 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:27.413 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.413 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.413 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.413 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.413 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:27.413 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.413 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.413 "name": "raid_bdev1", 00:14:27.413 "uuid": "6b83ac14-8818-49e0-ac19-c548ffb08fa9", 00:14:27.413 "strip_size_kb": 0, 00:14:27.413 "state": "online", 00:14:27.413 "raid_level": "raid1", 00:14:27.413 "superblock": true, 00:14:27.413 "num_base_bdevs": 2, 
00:14:27.413 "num_base_bdevs_discovered": 2, 00:14:27.413 "num_base_bdevs_operational": 2, 00:14:27.413 "base_bdevs_list": [ 00:14:27.413 { 00:14:27.413 "name": "spare", 00:14:27.413 "uuid": "c5962122-47dc-5751-88c4-c3f06e75c33c", 00:14:27.413 "is_configured": true, 00:14:27.414 "data_offset": 256, 00:14:27.414 "data_size": 7936 00:14:27.414 }, 00:14:27.414 { 00:14:27.414 "name": "BaseBdev2", 00:14:27.414 "uuid": "7f1a2d37-008d-5703-b3d8-06c6965f39c7", 00:14:27.414 "is_configured": true, 00:14:27.414 "data_offset": 256, 00:14:27.414 "data_size": 7936 00:14:27.414 } 00:14:27.414 ] 00:14:27.414 }' 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.414 09:49:05 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:27.414 [2024-10-30 09:49:05.895807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.414 
09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.414 "name": "raid_bdev1", 00:14:27.414 "uuid": "6b83ac14-8818-49e0-ac19-c548ffb08fa9", 00:14:27.414 "strip_size_kb": 0, 00:14:27.414 "state": "online", 00:14:27.414 "raid_level": "raid1", 00:14:27.414 "superblock": true, 00:14:27.414 "num_base_bdevs": 2, 00:14:27.414 "num_base_bdevs_discovered": 1, 00:14:27.414 "num_base_bdevs_operational": 1, 00:14:27.414 "base_bdevs_list": [ 00:14:27.414 { 00:14:27.414 "name": null, 00:14:27.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.414 "is_configured": false, 00:14:27.414 "data_offset": 0, 00:14:27.414 "data_size": 7936 00:14:27.414 }, 00:14:27.414 { 00:14:27.414 "name": "BaseBdev2", 00:14:27.414 "uuid": "7f1a2d37-008d-5703-b3d8-06c6965f39c7", 00:14:27.414 "is_configured": true, 00:14:27.414 "data_offset": 256, 00:14:27.414 "data_size": 7936 00:14:27.414 } 00:14:27.414 ] 00:14:27.414 }' 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.414 09:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:27.671 09:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:27.671 09:49:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.671 09:49:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:27.671 [2024-10-30 09:49:06.215899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:27.671 [2024-10-30 09:49:06.216052] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:27.671 [2024-10-30 09:49:06.216076] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:27.671 [2024-10-30 09:49:06.216102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:27.671 [2024-10-30 09:49:06.224774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:14:27.671 09:49:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.671 09:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:27.671 [2024-10-30 09:49:06.226331] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.043 "name": "raid_bdev1", 00:14:29.043 "uuid": "6b83ac14-8818-49e0-ac19-c548ffb08fa9", 00:14:29.043 "strip_size_kb": 0, 00:14:29.043 "state": "online", 
00:14:29.043 "raid_level": "raid1", 00:14:29.043 "superblock": true, 00:14:29.043 "num_base_bdevs": 2, 00:14:29.043 "num_base_bdevs_discovered": 2, 00:14:29.043 "num_base_bdevs_operational": 2, 00:14:29.043 "process": { 00:14:29.043 "type": "rebuild", 00:14:29.043 "target": "spare", 00:14:29.043 "progress": { 00:14:29.043 "blocks": 2560, 00:14:29.043 "percent": 32 00:14:29.043 } 00:14:29.043 }, 00:14:29.043 "base_bdevs_list": [ 00:14:29.043 { 00:14:29.043 "name": "spare", 00:14:29.043 "uuid": "c5962122-47dc-5751-88c4-c3f06e75c33c", 00:14:29.043 "is_configured": true, 00:14:29.043 "data_offset": 256, 00:14:29.043 "data_size": 7936 00:14:29.043 }, 00:14:29.043 { 00:14:29.043 "name": "BaseBdev2", 00:14:29.043 "uuid": "7f1a2d37-008d-5703-b3d8-06c6965f39c7", 00:14:29.043 "is_configured": true, 00:14:29.043 "data_offset": 256, 00:14:29.043 "data_size": 7936 00:14:29.043 } 00:14:29.043 ] 00:14:29.043 }' 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:29.043 [2024-10-30 09:49:07.332508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.043 [2024-10-30 09:49:07.431591] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:29.043 [2024-10-30 
09:49:07.431648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.043 [2024-10-30 09:49:07.431661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.043 [2024-10-30 09:49:07.431668] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.043 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.044 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.044 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.044 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.044 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:29.044 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:14:29.044 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.044 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.044 "name": "raid_bdev1", 00:14:29.044 "uuid": "6b83ac14-8818-49e0-ac19-c548ffb08fa9", 00:14:29.044 "strip_size_kb": 0, 00:14:29.044 "state": "online", 00:14:29.044 "raid_level": "raid1", 00:14:29.044 "superblock": true, 00:14:29.044 "num_base_bdevs": 2, 00:14:29.044 "num_base_bdevs_discovered": 1, 00:14:29.044 "num_base_bdevs_operational": 1, 00:14:29.044 "base_bdevs_list": [ 00:14:29.044 { 00:14:29.044 "name": null, 00:14:29.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.044 "is_configured": false, 00:14:29.044 "data_offset": 0, 00:14:29.044 "data_size": 7936 00:14:29.044 }, 00:14:29.044 { 00:14:29.044 "name": "BaseBdev2", 00:14:29.044 "uuid": "7f1a2d37-008d-5703-b3d8-06c6965f39c7", 00:14:29.044 "is_configured": true, 00:14:29.044 "data_offset": 256, 00:14:29.044 "data_size": 7936 00:14:29.044 } 00:14:29.044 ] 00:14:29.044 }' 00:14:29.044 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.044 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:29.302 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:29.302 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.302 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:29.302 [2024-10-30 09:49:07.770165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:29.302 [2024-10-30 09:49:07.770217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.302 [2024-10-30 09:49:07.770234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:14:29.302 [2024-10-30 09:49:07.770243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.302 [2024-10-30 09:49:07.770608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.302 [2024-10-30 09:49:07.770630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:29.302 [2024-10-30 09:49:07.770702] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:29.302 [2024-10-30 09:49:07.770713] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:29.302 [2024-10-30 09:49:07.770724] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:29.302 [2024-10-30 09:49:07.770741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:29.302 [2024-10-30 09:49:07.779287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:14:29.302 spare 00:14:29.302 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.302 09:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:29.302 [2024-10-30 09:49:07.780822] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:30.235 09:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.235 09:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.235 09:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.235 09:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.235 09:49:08 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.235 09:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.235 09:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.235 09:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.235 09:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:30.235 09:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.235 09:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.235 "name": "raid_bdev1", 00:14:30.235 "uuid": "6b83ac14-8818-49e0-ac19-c548ffb08fa9", 00:14:30.235 "strip_size_kb": 0, 00:14:30.235 "state": "online", 00:14:30.235 "raid_level": "raid1", 00:14:30.235 "superblock": true, 00:14:30.235 "num_base_bdevs": 2, 00:14:30.235 "num_base_bdevs_discovered": 2, 00:14:30.235 "num_base_bdevs_operational": 2, 00:14:30.235 "process": { 00:14:30.235 "type": "rebuild", 00:14:30.235 "target": "spare", 00:14:30.235 "progress": { 00:14:30.235 "blocks": 2560, 00:14:30.235 "percent": 32 00:14:30.235 } 00:14:30.235 }, 00:14:30.235 "base_bdevs_list": [ 00:14:30.235 { 00:14:30.235 "name": "spare", 00:14:30.235 "uuid": "c5962122-47dc-5751-88c4-c3f06e75c33c", 00:14:30.235 "is_configured": true, 00:14:30.235 "data_offset": 256, 00:14:30.235 "data_size": 7936 00:14:30.235 }, 00:14:30.235 { 00:14:30.235 "name": "BaseBdev2", 00:14:30.235 "uuid": "7f1a2d37-008d-5703-b3d8-06c6965f39c7", 00:14:30.235 "is_configured": true, 00:14:30.235 "data_offset": 256, 00:14:30.235 "data_size": 7936 00:14:30.235 } 00:14:30.235 ] 00:14:30.235 }' 00:14:30.235 09:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.235 09:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:14:30.235 09:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.494 09:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:30.494 09:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:30.494 09:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.494 09:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:30.494 [2024-10-30 09:49:08.887068] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:30.494 [2024-10-30 09:49:08.986035] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:30.494 [2024-10-30 09:49:08.986098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.494 [2024-10-30 09:49:08.986113] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:30.494 [2024-10-30 09:49:08.986119] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:30.494 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.494 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:30.494 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.494 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.494 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.494 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.494 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:14:30.494 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.494 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.494 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.494 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.494 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.494 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.494 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.494 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:30.494 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.494 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.494 "name": "raid_bdev1", 00:14:30.494 "uuid": "6b83ac14-8818-49e0-ac19-c548ffb08fa9", 00:14:30.494 "strip_size_kb": 0, 00:14:30.494 "state": "online", 00:14:30.494 "raid_level": "raid1", 00:14:30.494 "superblock": true, 00:14:30.494 "num_base_bdevs": 2, 00:14:30.494 "num_base_bdevs_discovered": 1, 00:14:30.494 "num_base_bdevs_operational": 1, 00:14:30.494 "base_bdevs_list": [ 00:14:30.494 { 00:14:30.494 "name": null, 00:14:30.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.494 "is_configured": false, 00:14:30.494 "data_offset": 0, 00:14:30.494 "data_size": 7936 00:14:30.494 }, 00:14:30.494 { 00:14:30.494 "name": "BaseBdev2", 00:14:30.494 "uuid": "7f1a2d37-008d-5703-b3d8-06c6965f39c7", 00:14:30.494 "is_configured": true, 00:14:30.494 "data_offset": 256, 00:14:30.494 "data_size": 7936 00:14:30.494 } 00:14:30.494 ] 00:14:30.494 }' 
00:14:30.494 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.494 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:30.752 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:30.752 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.752 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:30.752 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:30.752 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.752 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.752 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.752 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:30.752 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.752 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.752 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.752 "name": "raid_bdev1", 00:14:30.752 "uuid": "6b83ac14-8818-49e0-ac19-c548ffb08fa9", 00:14:30.752 "strip_size_kb": 0, 00:14:30.752 "state": "online", 00:14:30.752 "raid_level": "raid1", 00:14:30.752 "superblock": true, 00:14:30.753 "num_base_bdevs": 2, 00:14:30.753 "num_base_bdevs_discovered": 1, 00:14:30.753 "num_base_bdevs_operational": 1, 00:14:30.753 "base_bdevs_list": [ 00:14:30.753 { 00:14:30.753 "name": null, 00:14:30.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.753 "is_configured": false, 00:14:30.753 "data_offset": 0, 
00:14:30.753 "data_size": 7936 00:14:30.753 }, 00:14:30.753 { 00:14:30.753 "name": "BaseBdev2", 00:14:30.753 "uuid": "7f1a2d37-008d-5703-b3d8-06c6965f39c7", 00:14:30.753 "is_configured": true, 00:14:30.753 "data_offset": 256, 00:14:30.753 "data_size": 7936 00:14:30.753 } 00:14:30.753 ] 00:14:30.753 }' 00:14:30.753 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.012 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:31.012 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.012 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:31.012 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:31.012 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.012 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:31.012 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.012 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:31.012 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.012 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:31.012 [2024-10-30 09:49:09.424602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:31.012 [2024-10-30 09:49:09.424649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.012 [2024-10-30 09:49:09.424666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:31.012 [2024-10-30 09:49:09.424675] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.012 [2024-10-30 09:49:09.425035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.012 [2024-10-30 09:49:09.425066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:31.012 [2024-10-30 09:49:09.425131] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:31.012 [2024-10-30 09:49:09.425142] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:31.012 [2024-10-30 09:49:09.425149] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:31.012 [2024-10-30 09:49:09.425157] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:31.012 BaseBdev1 00:14:31.012 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.012 09:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:31.943 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:31.943 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.943 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.943 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.943 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.943 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:31.943 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.943 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.943 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.943 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.943 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.943 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.943 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.943 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:31.943 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.943 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.943 "name": "raid_bdev1", 00:14:31.943 "uuid": "6b83ac14-8818-49e0-ac19-c548ffb08fa9", 00:14:31.943 "strip_size_kb": 0, 00:14:31.943 "state": "online", 00:14:31.943 "raid_level": "raid1", 00:14:31.943 "superblock": true, 00:14:31.943 "num_base_bdevs": 2, 00:14:31.943 "num_base_bdevs_discovered": 1, 00:14:31.943 "num_base_bdevs_operational": 1, 00:14:31.943 "base_bdevs_list": [ 00:14:31.943 { 00:14:31.943 "name": null, 00:14:31.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.943 "is_configured": false, 00:14:31.943 "data_offset": 0, 00:14:31.943 "data_size": 7936 00:14:31.943 }, 00:14:31.943 { 00:14:31.943 "name": "BaseBdev2", 00:14:31.943 "uuid": "7f1a2d37-008d-5703-b3d8-06c6965f39c7", 00:14:31.943 "is_configured": true, 00:14:31.943 "data_offset": 256, 00:14:31.943 "data_size": 7936 00:14:31.943 } 00:14:31.943 ] 00:14:31.943 }' 00:14:31.943 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.943 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:14:32.201 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:32.201 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.201 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:32.201 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:32.201 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.201 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.201 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.201 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.201 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:32.201 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.201 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.201 "name": "raid_bdev1", 00:14:32.202 "uuid": "6b83ac14-8818-49e0-ac19-c548ffb08fa9", 00:14:32.202 "strip_size_kb": 0, 00:14:32.202 "state": "online", 00:14:32.202 "raid_level": "raid1", 00:14:32.202 "superblock": true, 00:14:32.202 "num_base_bdevs": 2, 00:14:32.202 "num_base_bdevs_discovered": 1, 00:14:32.202 "num_base_bdevs_operational": 1, 00:14:32.202 "base_bdevs_list": [ 00:14:32.202 { 00:14:32.202 "name": null, 00:14:32.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.202 "is_configured": false, 00:14:32.202 "data_offset": 0, 00:14:32.202 "data_size": 7936 00:14:32.202 }, 00:14:32.202 { 00:14:32.202 "name": "BaseBdev2", 00:14:32.202 "uuid": "7f1a2d37-008d-5703-b3d8-06c6965f39c7", 00:14:32.202 "is_configured": true, 
00:14:32.202 "data_offset": 256, 00:14:32.202 "data_size": 7936 00:14:32.202 } 00:14:32.202 ] 00:14:32.202 }' 00:14:32.202 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.202 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:32.202 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.202 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:32.202 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:32.202 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:14:32.202 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:32.202 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:32.202 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.202 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:32.202 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.202 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:32.202 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.202 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:32.459 [2024-10-30 09:49:10.824898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:32.459 [2024-10-30 09:49:10.825029] 
bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:32.459 [2024-10-30 09:49:10.825041] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:32.459 request: 00:14:32.459 { 00:14:32.459 "base_bdev": "BaseBdev1", 00:14:32.459 "raid_bdev": "raid_bdev1", 00:14:32.459 "method": "bdev_raid_add_base_bdev", 00:14:32.459 "req_id": 1 00:14:32.459 } 00:14:32.459 Got JSON-RPC error response 00:14:32.459 response: 00:14:32.459 { 00:14:32.459 "code": -22, 00:14:32.459 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:32.459 } 00:14:32.459 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:32.459 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:14:32.459 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:32.459 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:32.459 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:32.459 09:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:33.392 09:49:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:33.392 09:49:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.392 09:49:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.392 09:49:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.392 09:49:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.392 09:49:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:14:33.392 09:49:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.392 09:49:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.392 09:49:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.392 09:49:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.392 09:49:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.392 09:49:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.392 09:49:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.392 09:49:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:33.392 09:49:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.392 09:49:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.392 "name": "raid_bdev1", 00:14:33.392 "uuid": "6b83ac14-8818-49e0-ac19-c548ffb08fa9", 00:14:33.392 "strip_size_kb": 0, 00:14:33.392 "state": "online", 00:14:33.392 "raid_level": "raid1", 00:14:33.392 "superblock": true, 00:14:33.392 "num_base_bdevs": 2, 00:14:33.392 "num_base_bdevs_discovered": 1, 00:14:33.392 "num_base_bdevs_operational": 1, 00:14:33.392 "base_bdevs_list": [ 00:14:33.392 { 00:14:33.392 "name": null, 00:14:33.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.392 "is_configured": false, 00:14:33.392 "data_offset": 0, 00:14:33.392 "data_size": 7936 00:14:33.392 }, 00:14:33.392 { 00:14:33.392 "name": "BaseBdev2", 00:14:33.392 "uuid": "7f1a2d37-008d-5703-b3d8-06c6965f39c7", 00:14:33.392 "is_configured": true, 00:14:33.392 "data_offset": 256, 00:14:33.392 "data_size": 7936 00:14:33.392 } 00:14:33.392 ] 00:14:33.392 }' 
00:14:33.392 09:49:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.392 09:49:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:33.651 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:33.651 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.651 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:33.651 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:33.651 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.651 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.651 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.651 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.651 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:33.651 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.651 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.651 "name": "raid_bdev1", 00:14:33.651 "uuid": "6b83ac14-8818-49e0-ac19-c548ffb08fa9", 00:14:33.651 "strip_size_kb": 0, 00:14:33.651 "state": "online", 00:14:33.651 "raid_level": "raid1", 00:14:33.651 "superblock": true, 00:14:33.651 "num_base_bdevs": 2, 00:14:33.651 "num_base_bdevs_discovered": 1, 00:14:33.651 "num_base_bdevs_operational": 1, 00:14:33.651 "base_bdevs_list": [ 00:14:33.651 { 00:14:33.651 "name": null, 00:14:33.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.651 "is_configured": false, 00:14:33.651 "data_offset": 0, 
00:14:33.651 "data_size": 7936 00:14:33.651 }, 00:14:33.651 { 00:14:33.651 "name": "BaseBdev2", 00:14:33.651 "uuid": "7f1a2d37-008d-5703-b3d8-06c6965f39c7", 00:14:33.651 "is_configured": true, 00:14:33.651 "data_offset": 256, 00:14:33.651 "data_size": 7936 00:14:33.651 } 00:14:33.651 ] 00:14:33.651 }' 00:14:33.651 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.651 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:33.651 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.909 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:33.909 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 84053 00:14:33.909 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 84053 ']' 00:14:33.909 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 84053 00:14:33.909 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:14:33.909 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:33.909 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84053 00:14:33.909 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:33.909 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:33.909 killing process with pid 84053 00:14:33.909 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84053' 00:14:33.909 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@971 -- # kill 84053 00:14:33.909 Received shutdown signal, test time was about 
60.000000 seconds 00:14:33.909 00:14:33.909 Latency(us) 00:14:33.909 [2024-10-30T09:49:12.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.909 [2024-10-30T09:49:12.529Z] =================================================================================================================== 00:14:33.909 [2024-10-30T09:49:12.529Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:33.909 [2024-10-30 09:49:12.318776] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:33.909 09:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@976 -- # wait 84053 00:14:33.909 [2024-10-30 09:49:12.318874] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:33.909 [2024-10-30 09:49:12.318912] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:33.909 [2024-10-30 09:49:12.318921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:33.909 [2024-10-30 09:49:12.467834] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:34.477 09:49:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:14:34.477 00:14:34.477 real 0m17.058s 00:14:34.477 user 0m21.657s 00:14:34.477 sys 0m1.926s 00:14:34.477 09:49:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:34.477 09:49:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:34.477 ************************************ 00:14:34.477 END TEST raid_rebuild_test_sb_4k 00:14:34.477 ************************************ 00:14:34.478 09:49:13 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:14:34.478 09:49:13 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:14:34.478 09:49:13 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:34.478 09:49:13 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:34.478 09:49:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:34.478 ************************************ 00:14:34.478 START TEST raid_state_function_test_sb_md_separate 00:14:34.478 ************************************ 00:14:34.478 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:14:34.478 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:34.478 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:14:34.478 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:34.478 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:34.737 09:49:13 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=84721 00:14:34.737 Process raid pid: 84721 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84721' 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 84721 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 84721 ']' 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:14:34.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:34.737 09:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:34.737 [2024-10-30 09:49:13.161635] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:14:34.737 [2024-10-30 09:49:13.161754] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.737 [2024-10-30 09:49:13.316360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.995 [2024-10-30 09:49:13.398140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.995 [2024-10-30 09:49:13.508641] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.995 [2024-10-30 09:49:13.508667] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.568 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:35.568 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:14:35.568 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:35.568 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.568 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:35.568 [2024-10-30 09:49:14.054954] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:35.568 [2024-10-30 09:49:14.054998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:35.568 [2024-10-30 09:49:14.055007] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:35.568 [2024-10-30 09:49:14.055015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:35.568 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.568 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:35.568 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.568 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.568 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.568 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.568 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:35.568 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.568 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:14:35.568 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.568 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.568 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.568 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.568 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:35.568 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.568 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.568 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.568 "name": "Existed_Raid", 00:14:35.568 "uuid": "edca058d-ba06-488e-b097-56432a017ec9", 00:14:35.568 "strip_size_kb": 0, 00:14:35.568 "state": "configuring", 00:14:35.568 "raid_level": "raid1", 00:14:35.568 "superblock": true, 00:14:35.568 "num_base_bdevs": 2, 00:14:35.568 "num_base_bdevs_discovered": 0, 00:14:35.568 "num_base_bdevs_operational": 2, 00:14:35.568 "base_bdevs_list": [ 00:14:35.568 { 00:14:35.568 "name": "BaseBdev1", 00:14:35.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.568 "is_configured": false, 00:14:35.568 "data_offset": 0, 00:14:35.568 "data_size": 0 00:14:35.568 }, 00:14:35.568 { 00:14:35.568 "name": "BaseBdev2", 00:14:35.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.568 "is_configured": false, 00:14:35.568 "data_offset": 0, 00:14:35.568 "data_size": 0 00:14:35.568 } 00:14:35.568 ] 00:14:35.568 }' 00:14:35.568 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.568 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:35.832 [2024-10-30 09:49:14.382978] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:35.832 [2024-10-30 09:49:14.383010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:35.832 [2024-10-30 09:49:14.390978] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:35.832 [2024-10-30 09:49:14.391007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:35.832 [2024-10-30 09:49:14.391014] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:35.832 [2024-10-30 09:49:14.391023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:35.832 [2024-10-30 09:49:14.419446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:35.832 BaseBdev1 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.832 09:49:14 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:35.832 [ 00:14:35.832 { 00:14:35.832 "name": "BaseBdev1", 00:14:35.832 "aliases": [ 00:14:35.832 "76a9405a-018b-4472-ab88-0c44ac0e0274" 00:14:35.832 ], 00:14:35.832 "product_name": "Malloc disk", 00:14:35.832 "block_size": 4096, 00:14:35.832 "num_blocks": 8192, 00:14:35.832 "uuid": "76a9405a-018b-4472-ab88-0c44ac0e0274", 00:14:35.832 "md_size": 32, 00:14:35.832 "md_interleave": false, 00:14:35.832 "dif_type": 0, 00:14:35.832 "assigned_rate_limits": { 00:14:35.832 "rw_ios_per_sec": 0, 00:14:35.832 "rw_mbytes_per_sec": 0, 00:14:35.832 "r_mbytes_per_sec": 0, 00:14:35.832 "w_mbytes_per_sec": 0 00:14:35.832 }, 00:14:35.832 "claimed": true, 00:14:35.832 "claim_type": "exclusive_write", 00:14:35.832 "zoned": false, 00:14:35.832 "supported_io_types": { 00:14:35.832 "read": true, 00:14:35.832 "write": true, 00:14:35.832 "unmap": true, 00:14:35.832 "flush": true, 00:14:35.832 "reset": true, 00:14:35.832 "nvme_admin": false, 00:14:35.832 "nvme_io": false, 00:14:35.832 "nvme_io_md": false, 00:14:35.832 "write_zeroes": true, 00:14:35.832 "zcopy": true, 00:14:35.832 "get_zone_info": false, 00:14:35.832 "zone_management": false, 00:14:35.832 "zone_append": false, 00:14:35.832 "compare": false, 00:14:35.832 "compare_and_write": false, 00:14:35.832 "abort": true, 00:14:35.832 "seek_hole": false, 00:14:35.832 "seek_data": false, 00:14:35.832 "copy": true, 00:14:35.832 "nvme_iov_md": false 00:14:35.832 }, 00:14:35.832 "memory_domains": [ 00:14:35.832 { 00:14:35.832 "dma_device_id": "system", 00:14:35.832 "dma_device_type": 1 00:14:35.832 }, 00:14:35.832 { 00:14:35.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:14:35.832 "dma_device_type": 2 00:14:35.832 } 00:14:35.832 ], 00:14:35.832 "driver_specific": {} 00:14:35.832 } 00:14:35.832 ] 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.832 09:49:14 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.832 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:36.090 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.090 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.090 "name": "Existed_Raid", 00:14:36.090 "uuid": "4110020a-d6e5-4ecf-8ab0-90db74b2950c", 00:14:36.090 "strip_size_kb": 0, 00:14:36.090 "state": "configuring", 00:14:36.090 "raid_level": "raid1", 00:14:36.090 "superblock": true, 00:14:36.090 "num_base_bdevs": 2, 00:14:36.090 "num_base_bdevs_discovered": 1, 00:14:36.090 "num_base_bdevs_operational": 2, 00:14:36.090 "base_bdevs_list": [ 00:14:36.090 { 00:14:36.090 "name": "BaseBdev1", 00:14:36.090 "uuid": "76a9405a-018b-4472-ab88-0c44ac0e0274", 00:14:36.090 "is_configured": true, 00:14:36.090 "data_offset": 256, 00:14:36.090 "data_size": 7936 00:14:36.090 }, 00:14:36.090 { 00:14:36.090 "name": "BaseBdev2", 00:14:36.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.090 "is_configured": false, 00:14:36.090 "data_offset": 0, 00:14:36.090 "data_size": 0 00:14:36.090 } 00:14:36.090 ] 00:14:36.090 }' 00:14:36.090 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.090 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:36.349 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:36.349 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.349 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:36.349 [2024-10-30 09:49:14.759567] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:36.349 [2024-10-30 09:49:14.759611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:36.349 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.349 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:36.349 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.349 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:36.349 [2024-10-30 09:49:14.767601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.349 [2024-10-30 09:49:14.769109] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:36.349 [2024-10-30 09:49:14.769145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:36.349 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.349 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:36.349 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:36.349 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:36.349 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.349 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.349 09:49:14 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.349 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.349 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:36.349 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.349 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.349 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.349 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.349 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.349 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.349 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.349 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:36.349 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.349 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.349 "name": "Existed_Raid", 00:14:36.349 "uuid": "ecf19a94-3e5d-4eda-8e64-a030e5170183", 00:14:36.349 "strip_size_kb": 0, 00:14:36.349 "state": "configuring", 00:14:36.349 "raid_level": "raid1", 00:14:36.350 "superblock": true, 00:14:36.350 "num_base_bdevs": 2, 00:14:36.350 "num_base_bdevs_discovered": 1, 00:14:36.350 "num_base_bdevs_operational": 2, 00:14:36.350 
"base_bdevs_list": [ 00:14:36.350 { 00:14:36.350 "name": "BaseBdev1", 00:14:36.350 "uuid": "76a9405a-018b-4472-ab88-0c44ac0e0274", 00:14:36.350 "is_configured": true, 00:14:36.350 "data_offset": 256, 00:14:36.350 "data_size": 7936 00:14:36.350 }, 00:14:36.350 { 00:14:36.350 "name": "BaseBdev2", 00:14:36.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.350 "is_configured": false, 00:14:36.350 "data_offset": 0, 00:14:36.350 "data_size": 0 00:14:36.350 } 00:14:36.350 ] 00:14:36.350 }' 00:14:36.350 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.350 09:49:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:36.608 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:14:36.608 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.608 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:36.608 [2024-10-30 09:49:15.094457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:36.608 [2024-10-30 09:49:15.094623] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:36.608 [2024-10-30 09:49:15.094633] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:36.608 [2024-10-30 09:49:15.094695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:36.608 [2024-10-30 09:49:15.094782] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:36.608 [2024-10-30 09:49:15.094790] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:36.608 BaseBdev2 00:14:36.608 [2024-10-30 09:49:15.094855] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.608 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.608 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:36.608 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:36.608 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:36.608 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:14:36.608 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:36.608 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:36.608 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:36.609 [ 00:14:36.609 { 00:14:36.609 "name": "BaseBdev2", 00:14:36.609 "aliases": [ 00:14:36.609 "02a24efa-6f62-4dee-a091-29c2f75af062" 00:14:36.609 ], 
00:14:36.609 "product_name": "Malloc disk", 00:14:36.609 "block_size": 4096, 00:14:36.609 "num_blocks": 8192, 00:14:36.609 "uuid": "02a24efa-6f62-4dee-a091-29c2f75af062", 00:14:36.609 "md_size": 32, 00:14:36.609 "md_interleave": false, 00:14:36.609 "dif_type": 0, 00:14:36.609 "assigned_rate_limits": { 00:14:36.609 "rw_ios_per_sec": 0, 00:14:36.609 "rw_mbytes_per_sec": 0, 00:14:36.609 "r_mbytes_per_sec": 0, 00:14:36.609 "w_mbytes_per_sec": 0 00:14:36.609 }, 00:14:36.609 "claimed": true, 00:14:36.609 "claim_type": "exclusive_write", 00:14:36.609 "zoned": false, 00:14:36.609 "supported_io_types": { 00:14:36.609 "read": true, 00:14:36.609 "write": true, 00:14:36.609 "unmap": true, 00:14:36.609 "flush": true, 00:14:36.609 "reset": true, 00:14:36.609 "nvme_admin": false, 00:14:36.609 "nvme_io": false, 00:14:36.609 "nvme_io_md": false, 00:14:36.609 "write_zeroes": true, 00:14:36.609 "zcopy": true, 00:14:36.609 "get_zone_info": false, 00:14:36.609 "zone_management": false, 00:14:36.609 "zone_append": false, 00:14:36.609 "compare": false, 00:14:36.609 "compare_and_write": false, 00:14:36.609 "abort": true, 00:14:36.609 "seek_hole": false, 00:14:36.609 "seek_data": false, 00:14:36.609 "copy": true, 00:14:36.609 "nvme_iov_md": false 00:14:36.609 }, 00:14:36.609 "memory_domains": [ 00:14:36.609 { 00:14:36.609 "dma_device_id": "system", 00:14:36.609 "dma_device_type": 1 00:14:36.609 }, 00:14:36.609 { 00:14:36.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.609 "dma_device_type": 2 00:14:36.609 } 00:14:36.609 ], 00:14:36.609 "driver_specific": {} 00:14:36.609 } 00:14:36.609 ] 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:36.609 09:49:15 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.609 "name": "Existed_Raid", 00:14:36.609 "uuid": "ecf19a94-3e5d-4eda-8e64-a030e5170183", 00:14:36.609 "strip_size_kb": 0, 00:14:36.609 "state": "online", 00:14:36.609 "raid_level": "raid1", 00:14:36.609 "superblock": true, 00:14:36.609 "num_base_bdevs": 2, 00:14:36.609 "num_base_bdevs_discovered": 2, 00:14:36.609 "num_base_bdevs_operational": 2, 00:14:36.609 "base_bdevs_list": [ 00:14:36.609 { 00:14:36.609 "name": "BaseBdev1", 00:14:36.609 "uuid": "76a9405a-018b-4472-ab88-0c44ac0e0274", 00:14:36.609 "is_configured": true, 00:14:36.609 "data_offset": 256, 00:14:36.609 "data_size": 7936 00:14:36.609 }, 00:14:36.609 { 00:14:36.609 "name": "BaseBdev2", 00:14:36.609 "uuid": "02a24efa-6f62-4dee-a091-29c2f75af062", 00:14:36.609 "is_configured": true, 00:14:36.609 "data_offset": 256, 00:14:36.609 "data_size": 7936 00:14:36.609 } 00:14:36.609 ] 00:14:36.609 }' 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.609 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:36.868 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:36.868 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:36.868 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:36.868 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:36.868 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:14:36.868 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:36.868 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:36.868 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:36.868 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.868 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:36.868 [2024-10-30 09:49:15.426836] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:36.868 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.868 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:36.868 "name": "Existed_Raid", 00:14:36.868 "aliases": [ 00:14:36.868 "ecf19a94-3e5d-4eda-8e64-a030e5170183" 00:14:36.868 ], 00:14:36.868 "product_name": "Raid Volume", 00:14:36.868 "block_size": 4096, 00:14:36.868 "num_blocks": 7936, 00:14:36.868 "uuid": "ecf19a94-3e5d-4eda-8e64-a030e5170183", 00:14:36.868 "md_size": 32, 00:14:36.868 "md_interleave": false, 00:14:36.868 "dif_type": 0, 00:14:36.868 "assigned_rate_limits": { 00:14:36.868 "rw_ios_per_sec": 0, 00:14:36.868 "rw_mbytes_per_sec": 0, 00:14:36.868 "r_mbytes_per_sec": 0, 00:14:36.868 "w_mbytes_per_sec": 0 00:14:36.868 }, 00:14:36.868 "claimed": false, 00:14:36.868 "zoned": false, 00:14:36.868 "supported_io_types": { 00:14:36.868 "read": true, 00:14:36.868 "write": true, 00:14:36.868 "unmap": false, 00:14:36.868 "flush": false, 00:14:36.868 "reset": true, 00:14:36.868 "nvme_admin": false, 00:14:36.868 "nvme_io": false, 00:14:36.868 "nvme_io_md": false, 00:14:36.868 "write_zeroes": true, 00:14:36.868 "zcopy": false, 00:14:36.868 "get_zone_info": false, 00:14:36.868 "zone_management": false, 00:14:36.868 
"zone_append": false, 00:14:36.868 "compare": false, 00:14:36.868 "compare_and_write": false, 00:14:36.868 "abort": false, 00:14:36.868 "seek_hole": false, 00:14:36.868 "seek_data": false, 00:14:36.868 "copy": false, 00:14:36.868 "nvme_iov_md": false 00:14:36.868 }, 00:14:36.868 "memory_domains": [ 00:14:36.868 { 00:14:36.868 "dma_device_id": "system", 00:14:36.868 "dma_device_type": 1 00:14:36.868 }, 00:14:36.868 { 00:14:36.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.868 "dma_device_type": 2 00:14:36.868 }, 00:14:36.868 { 00:14:36.868 "dma_device_id": "system", 00:14:36.868 "dma_device_type": 1 00:14:36.868 }, 00:14:36.868 { 00:14:36.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.868 "dma_device_type": 2 00:14:36.868 } 00:14:36.868 ], 00:14:36.868 "driver_specific": { 00:14:36.868 "raid": { 00:14:36.868 "uuid": "ecf19a94-3e5d-4eda-8e64-a030e5170183", 00:14:36.868 "strip_size_kb": 0, 00:14:36.868 "state": "online", 00:14:36.868 "raid_level": "raid1", 00:14:36.868 "superblock": true, 00:14:36.868 "num_base_bdevs": 2, 00:14:36.868 "num_base_bdevs_discovered": 2, 00:14:36.868 "num_base_bdevs_operational": 2, 00:14:36.868 "base_bdevs_list": [ 00:14:36.868 { 00:14:36.868 "name": "BaseBdev1", 00:14:36.868 "uuid": "76a9405a-018b-4472-ab88-0c44ac0e0274", 00:14:36.868 "is_configured": true, 00:14:36.868 "data_offset": 256, 00:14:36.868 "data_size": 7936 00:14:36.868 }, 00:14:36.868 { 00:14:36.868 "name": "BaseBdev2", 00:14:36.868 "uuid": "02a24efa-6f62-4dee-a091-29c2f75af062", 00:14:36.868 "is_configured": true, 00:14:36.868 "data_offset": 256, 00:14:36.868 "data_size": 7936 00:14:36.868 } 00:14:36.868 ] 00:14:36.868 } 00:14:36.868 } 00:14:36.868 }' 00:14:36.868 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='BaseBdev1 00:14:37.127 BaseBdev2' 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:37.127 09:49:15 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:37.127 [2024-10-30 09:49:15.598626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:37.127 09:49:15 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.127 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.128 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.128 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.128 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.128 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:37.128 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.128 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.128 "name": "Existed_Raid", 00:14:37.128 "uuid": "ecf19a94-3e5d-4eda-8e64-a030e5170183", 00:14:37.128 
"strip_size_kb": 0, 00:14:37.128 "state": "online", 00:14:37.128 "raid_level": "raid1", 00:14:37.128 "superblock": true, 00:14:37.128 "num_base_bdevs": 2, 00:14:37.128 "num_base_bdevs_discovered": 1, 00:14:37.128 "num_base_bdevs_operational": 1, 00:14:37.128 "base_bdevs_list": [ 00:14:37.128 { 00:14:37.128 "name": null, 00:14:37.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.128 "is_configured": false, 00:14:37.128 "data_offset": 0, 00:14:37.128 "data_size": 7936 00:14:37.128 }, 00:14:37.128 { 00:14:37.128 "name": "BaseBdev2", 00:14:37.128 "uuid": "02a24efa-6f62-4dee-a091-29c2f75af062", 00:14:37.128 "is_configured": true, 00:14:37.128 "data_offset": 256, 00:14:37.128 "data_size": 7936 00:14:37.128 } 00:14:37.128 ] 00:14:37.128 }' 00:14:37.128 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.128 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:37.386 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:37.386 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:37.386 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.386 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.386 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:37.386 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:37.386 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.386 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:37.386 09:49:15 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:37.386 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:37.386 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.386 09:49:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:37.386 [2024-10-30 09:49:16.001541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:37.386 [2024-10-30 09:49:16.001624] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:37.644 [2024-10-30 09:49:16.052873] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.644 [2024-10-30 09:49:16.052913] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:37.644 [2024-10-30 09:49:16.052922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:37.644 09:49:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.644 09:49:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:37.644 09:49:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:37.644 09:49:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.644 09:49:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.644 09:49:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:37.644 09:49:16 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:14:37.644 09:49:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.644 09:49:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:37.644 09:49:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:37.644 09:49:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:14:37.644 09:49:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 84721 00:14:37.644 09:49:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 84721 ']' 00:14:37.644 09:49:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 84721 00:14:37.644 09:49:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:14:37.644 09:49:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:37.644 09:49:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84721 00:14:37.644 09:49:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:37.644 09:49:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:37.644 killing process with pid 84721 00:14:37.644 09:49:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84721' 00:14:37.644 09:49:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 84721 00:14:37.644 [2024-10-30 09:49:16.112663] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:37.644 09:49:16 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 84721 00:14:37.644 [2024-10-30 09:49:16.121044] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:38.210 09:49:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:14:38.210 00:14:38.210 real 0m3.596s 00:14:38.210 user 0m5.281s 00:14:38.210 sys 0m0.581s 00:14:38.210 09:49:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:38.210 ************************************ 00:14:38.210 END TEST raid_state_function_test_sb_md_separate 00:14:38.210 09:49:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:38.210 ************************************ 00:14:38.210 09:49:16 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:14:38.210 09:49:16 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:38.210 09:49:16 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:38.210 09:49:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:38.210 ************************************ 00:14:38.210 START TEST raid_superblock_test_md_separate 00:14:38.210 ************************************ 00:14:38.210 09:49:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:14:38.210 09:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:38.210 09:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:14:38.210 09:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:38.210 09:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:38.210 09:49:16 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:38.210 09:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:38.210 09:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:38.210 09:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:38.210 09:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:38.210 09:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:38.210 09:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:38.210 09:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:38.210 09:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:38.210 09:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:38.210 09:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:38.210 09:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=84951 00:14:38.210 09:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 84951 00:14:38.210 09:49:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # '[' -z 84951 ']' 00:14:38.210 09:49:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.210 09:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:38.210 09:49:16 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:14:38.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.210 09:49:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.210 09:49:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:38.210 09:49:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:38.210 [2024-10-30 09:49:16.796212] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:14:38.210 [2024-10-30 09:49:16.796324] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84951 ] 00:14:38.468 [2024-10-30 09:49:16.945193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.468 [2024-10-30 09:49:17.026314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.727 [2024-10-30 09:49:17.134856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:38.727 [2024-10-30 09:49:17.134888] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:38.987 09:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:38.987 09:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@866 -- # return 0 00:14:38.987 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:38.987 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:38.987 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 
-- # local bdev_malloc=malloc1 00:14:38.987 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:38.987 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:38.987 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:38.987 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:38.987 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:38.987 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:14:38.987 09:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.987 09:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:39.246 malloc1 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:39.246 [2024-10-30 09:49:17.632388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:39.246 [2024-10-30 09:49:17.632433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.246 [2024-10-30 09:49:17.632449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:39.246 [2024-10-30 
09:49:17.632457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.246 [2024-10-30 09:49:17.634015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.246 [2024-10-30 09:49:17.634046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:39.246 pt1 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:39.246 malloc2 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.246 09:49:17 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:39.246 [2024-10-30 09:49:17.664288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:39.246 [2024-10-30 09:49:17.664323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.246 [2024-10-30 09:49:17.664338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:39.246 [2024-10-30 09:49:17.664345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.246 [2024-10-30 09:49:17.665891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.246 [2024-10-30 09:49:17.665917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:39.246 pt2 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:39.246 [2024-10-30 09:49:17.672315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:39.246 
[2024-10-30 09:49:17.673861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:39.246 [2024-10-30 09:49:17.674005] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:39.246 [2024-10-30 09:49:17.674023] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:39.246 [2024-10-30 09:49:17.674094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:39.246 [2024-10-30 09:49:17.674189] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:39.246 [2024-10-30 09:49:17.674206] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:39.246 [2024-10-30 09:49:17.674284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.246 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.246 "name": "raid_bdev1", 00:14:39.246 "uuid": "e473d973-307c-45fc-bac2-ebf8719e3ddd", 00:14:39.246 "strip_size_kb": 0, 00:14:39.246 "state": "online", 00:14:39.246 "raid_level": "raid1", 00:14:39.246 "superblock": true, 00:14:39.246 "num_base_bdevs": 2, 00:14:39.246 "num_base_bdevs_discovered": 2, 00:14:39.246 "num_base_bdevs_operational": 2, 00:14:39.246 "base_bdevs_list": [ 00:14:39.246 { 00:14:39.246 "name": "pt1", 00:14:39.246 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:39.246 "is_configured": true, 00:14:39.246 "data_offset": 256, 00:14:39.246 "data_size": 7936 00:14:39.246 }, 00:14:39.246 { 00:14:39.246 "name": "pt2", 00:14:39.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:39.246 "is_configured": true, 00:14:39.246 "data_offset": 256, 00:14:39.246 "data_size": 7936 00:14:39.247 } 00:14:39.247 ] 00:14:39.247 }' 00:14:39.247 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.247 09:49:17 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:14:39.504 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:39.504 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:39.504 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:39.504 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:39.504 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:14:39.504 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:39.504 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:39.504 09:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.504 09:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:39.504 09:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:39.504 [2024-10-30 09:49:17.992601] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.504 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.504 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:39.504 "name": "raid_bdev1", 00:14:39.504 "aliases": [ 00:14:39.504 "e473d973-307c-45fc-bac2-ebf8719e3ddd" 00:14:39.504 ], 00:14:39.504 "product_name": "Raid Volume", 00:14:39.504 "block_size": 4096, 00:14:39.504 "num_blocks": 7936, 00:14:39.505 "uuid": "e473d973-307c-45fc-bac2-ebf8719e3ddd", 00:14:39.505 "md_size": 32, 00:14:39.505 "md_interleave": false, 00:14:39.505 "dif_type": 0, 00:14:39.505 
"assigned_rate_limits": { 00:14:39.505 "rw_ios_per_sec": 0, 00:14:39.505 "rw_mbytes_per_sec": 0, 00:14:39.505 "r_mbytes_per_sec": 0, 00:14:39.505 "w_mbytes_per_sec": 0 00:14:39.505 }, 00:14:39.505 "claimed": false, 00:14:39.505 "zoned": false, 00:14:39.505 "supported_io_types": { 00:14:39.505 "read": true, 00:14:39.505 "write": true, 00:14:39.505 "unmap": false, 00:14:39.505 "flush": false, 00:14:39.505 "reset": true, 00:14:39.505 "nvme_admin": false, 00:14:39.505 "nvme_io": false, 00:14:39.505 "nvme_io_md": false, 00:14:39.505 "write_zeroes": true, 00:14:39.505 "zcopy": false, 00:14:39.505 "get_zone_info": false, 00:14:39.505 "zone_management": false, 00:14:39.505 "zone_append": false, 00:14:39.505 "compare": false, 00:14:39.505 "compare_and_write": false, 00:14:39.505 "abort": false, 00:14:39.505 "seek_hole": false, 00:14:39.505 "seek_data": false, 00:14:39.505 "copy": false, 00:14:39.505 "nvme_iov_md": false 00:14:39.505 }, 00:14:39.505 "memory_domains": [ 00:14:39.505 { 00:14:39.505 "dma_device_id": "system", 00:14:39.505 "dma_device_type": 1 00:14:39.505 }, 00:14:39.505 { 00:14:39.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.505 "dma_device_type": 2 00:14:39.505 }, 00:14:39.505 { 00:14:39.505 "dma_device_id": "system", 00:14:39.505 "dma_device_type": 1 00:14:39.505 }, 00:14:39.505 { 00:14:39.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.505 "dma_device_type": 2 00:14:39.505 } 00:14:39.505 ], 00:14:39.505 "driver_specific": { 00:14:39.505 "raid": { 00:14:39.505 "uuid": "e473d973-307c-45fc-bac2-ebf8719e3ddd", 00:14:39.505 "strip_size_kb": 0, 00:14:39.505 "state": "online", 00:14:39.505 "raid_level": "raid1", 00:14:39.505 "superblock": true, 00:14:39.505 "num_base_bdevs": 2, 00:14:39.505 "num_base_bdevs_discovered": 2, 00:14:39.505 "num_base_bdevs_operational": 2, 00:14:39.505 "base_bdevs_list": [ 00:14:39.505 { 00:14:39.505 "name": "pt1", 00:14:39.505 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:39.505 "is_configured": true, 
00:14:39.505 "data_offset": 256, 00:14:39.505 "data_size": 7936 00:14:39.505 }, 00:14:39.505 { 00:14:39.505 "name": "pt2", 00:14:39.505 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:39.505 "is_configured": true, 00:14:39.505 "data_offset": 256, 00:14:39.505 "data_size": 7936 00:14:39.505 } 00:14:39.505 ] 00:14:39.505 } 00:14:39.505 } 00:14:39.505 }' 00:14:39.505 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:39.505 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:39.505 pt2' 00:14:39.505 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.505 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:14:39.505 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.505 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:39.505 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.505 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.505 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:39.505 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.505 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:14:39.505 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ 
\f\a\l\s\e\ \0 ]] 00:14:39.505 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.505 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:39.505 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.505 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.505 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:39.505 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.763 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:14:39.763 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:14:39.763 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:39.763 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.763 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:39.763 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:39.763 [2024-10-30 09:49:18.136582] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.763 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.763 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e473d973-307c-45fc-bac2-ebf8719e3ddd 00:14:39.763 09:49:18 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@436 -- # '[' -z e473d973-307c-45fc-bac2-ebf8719e3ddd ']' 00:14:39.763 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:39.763 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.763 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:39.763 [2024-10-30 09:49:18.168357] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:39.763 [2024-10-30 09:49:18.168379] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:39.763 [2024-10-30 09:49:18.168437] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.763 [2024-10-30 09:49:18.168484] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.763 [2024-10-30 09:49:18.168493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:39.763 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.763 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.763 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:39.763 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.763 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:39.763 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.763 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- 
# '[' -n '' ']' 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:39.764 09:49:18 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:39.764 [2024-10-30 09:49:18.264390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:39.764 [2024-10-30 09:49:18.265912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:39.764 [2024-10-30 09:49:18.265971] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:39.764 [2024-10-30 09:49:18.266014] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:39.764 [2024-10-30 09:49:18.266025] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:39.764 [2024-10-30 09:49:18.266034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:39.764 request: 00:14:39.764 { 00:14:39.764 "name": "raid_bdev1", 00:14:39.764 "raid_level": "raid1", 00:14:39.764 "base_bdevs": [ 00:14:39.764 "malloc1", 00:14:39.764 "malloc2" 00:14:39.764 ], 00:14:39.764 "superblock": false, 00:14:39.764 "method": "bdev_raid_create", 00:14:39.764 "req_id": 1 00:14:39.764 } 00:14:39.764 Got JSON-RPC error response 00:14:39.764 response: 00:14:39.764 { 00:14:39.764 "code": -17, 00:14:39.764 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:39.764 } 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # 
raid_bdev= 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:39.764 [2024-10-30 09:49:18.308382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:39.764 [2024-10-30 09:49:18.308421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.764 [2024-10-30 09:49:18.308432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:39.764 [2024-10-30 09:49:18.308440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.764 [2024-10-30 09:49:18.309994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.764 [2024-10-30 09:49:18.310026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:39.764 [2024-10-30 09:49:18.310072] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:39.764 [2024-10-30 09:49:18.310113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:39.764 pt1 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 
-- # local expected_state=configuring 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.764 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.764 "name": "raid_bdev1", 00:14:39.764 "uuid": "e473d973-307c-45fc-bac2-ebf8719e3ddd", 00:14:39.764 "strip_size_kb": 0, 00:14:39.764 "state": "configuring", 00:14:39.764 "raid_level": "raid1", 00:14:39.764 "superblock": true, 00:14:39.764 "num_base_bdevs": 2, 00:14:39.764 "num_base_bdevs_discovered": 1, 00:14:39.764 "num_base_bdevs_operational": 2, 00:14:39.764 "base_bdevs_list": [ 00:14:39.764 { 
00:14:39.764 "name": "pt1", 00:14:39.765 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:39.765 "is_configured": true, 00:14:39.765 "data_offset": 256, 00:14:39.765 "data_size": 7936 00:14:39.765 }, 00:14:39.765 { 00:14:39.765 "name": null, 00:14:39.765 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:39.765 "is_configured": false, 00:14:39.765 "data_offset": 256, 00:14:39.765 "data_size": 7936 00:14:39.765 } 00:14:39.765 ] 00:14:39.765 }' 00:14:39.765 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.765 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:40.023 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:14:40.023 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:40.023 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:40.023 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:40.023 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.023 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:40.023 [2024-10-30 09:49:18.628460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:40.023 [2024-10-30 09:49:18.628521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.023 [2024-10-30 09:49:18.628535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:40.023 [2024-10-30 09:49:18.628543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.023 [2024-10-30 09:49:18.628711] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:14:40.023 [2024-10-30 09:49:18.628723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:40.023 [2024-10-30 09:49:18.628761] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:40.023 [2024-10-30 09:49:18.628777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:40.023 [2024-10-30 09:49:18.628858] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:40.023 [2024-10-30 09:49:18.628867] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:40.023 [2024-10-30 09:49:18.628919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:40.023 [2024-10-30 09:49:18.629016] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:40.023 [2024-10-30 09:49:18.629023] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:40.023 [2024-10-30 09:49:18.629109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.023 pt2 00:14:40.023 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.023 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:40.023 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:40.023 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:40.023 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.023 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.023 09:49:18 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.023 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.023 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:40.023 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.023 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.023 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.023 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.023 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.023 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.023 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.023 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:40.281 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.281 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.281 "name": "raid_bdev1", 00:14:40.281 "uuid": "e473d973-307c-45fc-bac2-ebf8719e3ddd", 00:14:40.281 "strip_size_kb": 0, 00:14:40.281 "state": "online", 00:14:40.281 "raid_level": "raid1", 00:14:40.281 "superblock": true, 00:14:40.281 "num_base_bdevs": 2, 00:14:40.281 "num_base_bdevs_discovered": 2, 00:14:40.281 "num_base_bdevs_operational": 2, 00:14:40.281 "base_bdevs_list": [ 00:14:40.281 { 00:14:40.281 "name": "pt1", 00:14:40.281 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:40.281 
"is_configured": true, 00:14:40.281 "data_offset": 256, 00:14:40.281 "data_size": 7936 00:14:40.281 }, 00:14:40.281 { 00:14:40.281 "name": "pt2", 00:14:40.281 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.281 "is_configured": true, 00:14:40.281 "data_offset": 256, 00:14:40.281 "data_size": 7936 00:14:40.281 } 00:14:40.281 ] 00:14:40.281 }' 00:14:40.281 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.281 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:40.540 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:40.540 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:40.540 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:40.540 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:40.540 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:14:40.540 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:40.540 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:40.540 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.540 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:40.540 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:40.540 [2024-10-30 09:49:18.952752] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:40.540 09:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:14:40.540 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:40.540 "name": "raid_bdev1", 00:14:40.540 "aliases": [ 00:14:40.540 "e473d973-307c-45fc-bac2-ebf8719e3ddd" 00:14:40.540 ], 00:14:40.540 "product_name": "Raid Volume", 00:14:40.540 "block_size": 4096, 00:14:40.540 "num_blocks": 7936, 00:14:40.540 "uuid": "e473d973-307c-45fc-bac2-ebf8719e3ddd", 00:14:40.540 "md_size": 32, 00:14:40.540 "md_interleave": false, 00:14:40.540 "dif_type": 0, 00:14:40.540 "assigned_rate_limits": { 00:14:40.540 "rw_ios_per_sec": 0, 00:14:40.540 "rw_mbytes_per_sec": 0, 00:14:40.540 "r_mbytes_per_sec": 0, 00:14:40.540 "w_mbytes_per_sec": 0 00:14:40.540 }, 00:14:40.540 "claimed": false, 00:14:40.540 "zoned": false, 00:14:40.540 "supported_io_types": { 00:14:40.540 "read": true, 00:14:40.540 "write": true, 00:14:40.540 "unmap": false, 00:14:40.540 "flush": false, 00:14:40.540 "reset": true, 00:14:40.540 "nvme_admin": false, 00:14:40.540 "nvme_io": false, 00:14:40.540 "nvme_io_md": false, 00:14:40.540 "write_zeroes": true, 00:14:40.540 "zcopy": false, 00:14:40.540 "get_zone_info": false, 00:14:40.540 "zone_management": false, 00:14:40.540 "zone_append": false, 00:14:40.540 "compare": false, 00:14:40.540 "compare_and_write": false, 00:14:40.540 "abort": false, 00:14:40.540 "seek_hole": false, 00:14:40.540 "seek_data": false, 00:14:40.540 "copy": false, 00:14:40.540 "nvme_iov_md": false 00:14:40.540 }, 00:14:40.540 "memory_domains": [ 00:14:40.540 { 00:14:40.540 "dma_device_id": "system", 00:14:40.540 "dma_device_type": 1 00:14:40.540 }, 00:14:40.540 { 00:14:40.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.540 "dma_device_type": 2 00:14:40.540 }, 00:14:40.540 { 00:14:40.540 "dma_device_id": "system", 00:14:40.540 "dma_device_type": 1 00:14:40.540 }, 00:14:40.540 { 00:14:40.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.540 "dma_device_type": 2 00:14:40.540 } 00:14:40.540 ], 00:14:40.540 "driver_specific": { 
00:14:40.540 "raid": { 00:14:40.540 "uuid": "e473d973-307c-45fc-bac2-ebf8719e3ddd", 00:14:40.540 "strip_size_kb": 0, 00:14:40.540 "state": "online", 00:14:40.540 "raid_level": "raid1", 00:14:40.540 "superblock": true, 00:14:40.540 "num_base_bdevs": 2, 00:14:40.540 "num_base_bdevs_discovered": 2, 00:14:40.540 "num_base_bdevs_operational": 2, 00:14:40.540 "base_bdevs_list": [ 00:14:40.540 { 00:14:40.540 "name": "pt1", 00:14:40.540 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:40.540 "is_configured": true, 00:14:40.540 "data_offset": 256, 00:14:40.540 "data_size": 7936 00:14:40.540 }, 00:14:40.540 { 00:14:40.540 "name": "pt2", 00:14:40.540 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.540 "is_configured": true, 00:14:40.540 "data_offset": 256, 00:14:40.540 "data_size": 7936 00:14:40.540 } 00:14:40.540 ] 00:14:40.540 } 00:14:40.540 } 00:14:40.540 }' 00:14:40.540 09:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:40.540 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:40.540 pt2' 00:14:40.540 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:40.540 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:14:40.540 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.541 09:49:19 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:40.541 [2024-10-30 09:49:19.120784] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' e473d973-307c-45fc-bac2-ebf8719e3ddd '!=' e473d973-307c-45fc-bac2-ebf8719e3ddd ']' 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:40.541 [2024-10-30 09:49:19.148581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:40.541 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.798 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.799 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.799 "name": "raid_bdev1", 00:14:40.799 "uuid": "e473d973-307c-45fc-bac2-ebf8719e3ddd", 00:14:40.799 "strip_size_kb": 0, 00:14:40.799 "state": "online", 00:14:40.799 "raid_level": "raid1", 00:14:40.799 "superblock": true, 00:14:40.799 "num_base_bdevs": 2, 00:14:40.799 "num_base_bdevs_discovered": 1, 00:14:40.799 "num_base_bdevs_operational": 1, 00:14:40.799 "base_bdevs_list": [ 00:14:40.799 { 00:14:40.799 "name": null, 00:14:40.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.799 "is_configured": false, 00:14:40.799 "data_offset": 0, 00:14:40.799 "data_size": 7936 00:14:40.799 }, 00:14:40.799 { 00:14:40.799 
"name": "pt2", 00:14:40.799 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.799 "is_configured": true, 00:14:40.799 "data_offset": 256, 00:14:40.799 "data_size": 7936 00:14:40.799 } 00:14:40.799 ] 00:14:40.799 }' 00:14:40.799 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.799 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:41.057 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:41.057 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.057 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:41.057 [2024-10-30 09:49:19.468618] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.057 [2024-10-30 09:49:19.468639] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.057 [2024-10-30 09:49:19.468693] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.057 [2024-10-30 09:49:19.468729] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.057 [2024-10-30 09:49:19.468738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:41.057 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.057 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:41.057 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.057 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.057 09:49:19 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:41.057 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.057 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:41.057 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:41.057 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:41.057 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:41.057 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:41.057 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.057 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:41.057 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.057 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:41.057 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:41.057 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:41.057 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:41.057 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:14:41.057 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:41.057 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.057 09:49:19 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:41.057 [2024-10-30 09:49:19.516628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:41.057 [2024-10-30 09:49:19.516674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.057 [2024-10-30 09:49:19.516686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:41.057 [2024-10-30 09:49:19.516694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.057 [2024-10-30 09:49:19.518344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.057 [2024-10-30 09:49:19.518378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:41.058 [2024-10-30 09:49:19.518417] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:41.058 [2024-10-30 09:49:19.518450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:41.058 [2024-10-30 09:49:19.518516] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:41.058 [2024-10-30 09:49:19.518525] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:41.058 [2024-10-30 09:49:19.518578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:41.058 [2024-10-30 09:49:19.518653] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:41.058 [2024-10-30 09:49:19.518659] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:41.058 [2024-10-30 09:49:19.518726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.058 pt2 00:14:41.058 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.058 09:49:19 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:41.058 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.058 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.058 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.058 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.058 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:41.058 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.058 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.058 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.058 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.058 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.058 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.058 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.058 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:41.058 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.058 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.058 "name": "raid_bdev1", 00:14:41.058 "uuid": 
"e473d973-307c-45fc-bac2-ebf8719e3ddd", 00:14:41.058 "strip_size_kb": 0, 00:14:41.058 "state": "online", 00:14:41.058 "raid_level": "raid1", 00:14:41.058 "superblock": true, 00:14:41.058 "num_base_bdevs": 2, 00:14:41.058 "num_base_bdevs_discovered": 1, 00:14:41.058 "num_base_bdevs_operational": 1, 00:14:41.058 "base_bdevs_list": [ 00:14:41.058 { 00:14:41.058 "name": null, 00:14:41.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.058 "is_configured": false, 00:14:41.058 "data_offset": 256, 00:14:41.058 "data_size": 7936 00:14:41.058 }, 00:14:41.058 { 00:14:41.058 "name": "pt2", 00:14:41.058 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.058 "is_configured": true, 00:14:41.058 "data_offset": 256, 00:14:41.058 "data_size": 7936 00:14:41.058 } 00:14:41.058 ] 00:14:41.058 }' 00:14:41.058 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.058 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:41.316 [2024-10-30 09:49:19.824665] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.316 [2024-10-30 09:49:19.824786] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.316 [2024-10-30 09:49:19.824851] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.316 [2024-10-30 09:49:19.824891] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.316 [2024-10-30 09:49:19.824898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:41.316 [2024-10-30 09:49:19.864691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:41.316 [2024-10-30 09:49:19.864736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.316 [2024-10-30 09:49:19.864750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:14:41.316 [2024-10-30 09:49:19.864757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.316 [2024-10-30 
09:49:19.866428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.316 [2024-10-30 09:49:19.866457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:41.316 [2024-10-30 09:49:19.866499] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:41.316 [2024-10-30 09:49:19.866529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:41.316 [2024-10-30 09:49:19.866624] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:41.316 [2024-10-30 09:49:19.866631] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.316 [2024-10-30 09:49:19.866645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:41.316 [2024-10-30 09:49:19.866687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:41.316 [2024-10-30 09:49:19.866736] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:41.316 [2024-10-30 09:49:19.866743] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:41.316 [2024-10-30 09:49:19.866798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:41.316 [2024-10-30 09:49:19.866871] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:41.316 [2024-10-30 09:49:19.866884] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:41.316 [2024-10-30 09:49:19.866962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.316 pt1 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.316 09:49:19 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.316 "name": "raid_bdev1", 00:14:41.316 "uuid": "e473d973-307c-45fc-bac2-ebf8719e3ddd", 00:14:41.316 "strip_size_kb": 0, 00:14:41.316 "state": "online", 00:14:41.316 "raid_level": "raid1", 00:14:41.316 "superblock": true, 00:14:41.316 "num_base_bdevs": 2, 00:14:41.316 "num_base_bdevs_discovered": 1, 00:14:41.316 "num_base_bdevs_operational": 1, 00:14:41.316 "base_bdevs_list": [ 00:14:41.316 { 00:14:41.316 "name": null, 00:14:41.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.316 "is_configured": false, 00:14:41.316 "data_offset": 256, 00:14:41.316 "data_size": 7936 00:14:41.316 }, 00:14:41.316 { 00:14:41.316 "name": "pt2", 00:14:41.316 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.316 "is_configured": true, 00:14:41.316 "data_offset": 256, 00:14:41.316 "data_size": 7936 00:14:41.316 } 00:14:41.316 ] 00:14:41.316 }' 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.316 09:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:41.574 09:49:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:41.574 09:49:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.574 09:49:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:41.574 09:49:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:41.575 09:49:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.834 09:49:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:41.834 09:49:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:14:41.834 09:49:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.834 09:49:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:41.834 09:49:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:41.834 [2024-10-30 09:49:20.212969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:41.834 09:49:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.834 09:49:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' e473d973-307c-45fc-bac2-ebf8719e3ddd '!=' e473d973-307c-45fc-bac2-ebf8719e3ddd ']' 00:14:41.834 09:49:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 84951 00:14:41.834 09:49:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # '[' -z 84951 ']' 00:14:41.834 09:49:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # kill -0 84951 00:14:41.834 09:49:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # uname 00:14:41.834 09:49:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:41.834 09:49:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84951 00:14:41.834 killing process with pid 84951 00:14:41.834 09:49:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:41.834 09:49:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:41.834 09:49:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84951' 00:14:41.834 09:49:20 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@971 -- # kill 84951 00:14:41.834 [2024-10-30 09:49:20.269649] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:41.834 [2024-10-30 09:49:20.269712] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.834 09:49:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@976 -- # wait 84951 00:14:41.834 [2024-10-30 09:49:20.269751] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.834 [2024-10-30 09:49:20.269765] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:41.834 [2024-10-30 09:49:20.379729] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:42.400 09:49:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:14:42.400 00:14:42.400 real 0m4.199s 00:14:42.400 user 0m6.489s 00:14:42.400 sys 0m0.643s 00:14:42.400 ************************************ 00:14:42.400 END TEST raid_superblock_test_md_separate 00:14:42.400 ************************************ 00:14:42.400 09:49:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:42.400 09:49:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:42.400 09:49:20 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:14:42.400 09:49:20 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:14:42.400 09:49:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:42.400 09:49:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:42.400 09:49:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:42.400 ************************************ 00:14:42.400 START TEST raid_rebuild_test_sb_md_separate 00:14:42.400 
************************************ 00:14:42.400 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:14:42.400 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:42.401 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=85259 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 85259 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 85259 ']' 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:42.401 09:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:42.659 [2024-10-30 09:49:21.047527] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:14:42.659 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:42.659 Zero copy mechanism will not be used. 00:14:42.659 [2024-10-30 09:49:21.047774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85259 ] 00:14:42.659 [2024-10-30 09:49:21.201770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.916 [2024-10-30 09:49:21.286825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.916 [2024-10-30 09:49:21.399386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:42.916 [2024-10-30 09:49:21.399414] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.483 09:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:43.483 09:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:14:43.483 09:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:43.483 09:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:14:43.483 09:49:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.483 09:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:43.483 BaseBdev1_malloc 00:14:43.483 09:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.483 09:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:43.484 09:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.484 09:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:43.484 [2024-10-30 09:49:21.917661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:43.484 [2024-10-30 09:49:21.917826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.484 [2024-10-30 09:49:21.917850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:43.484 [2024-10-30 09:49:21.917860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.484 [2024-10-30 09:49:21.919470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.484 [2024-10-30 09:49:21.919498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:43.484 BaseBdev1 00:14:43.484 09:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.484 09:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:43.484 09:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:14:43.484 09:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:43.484 09:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:43.484 BaseBdev2_malloc 00:14:43.484 09:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.484 09:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:43.484 09:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.484 09:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:43.484 [2024-10-30 09:49:21.949547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:43.484 [2024-10-30 09:49:21.949591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.484 [2024-10-30 09:49:21.949605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:43.484 [2024-10-30 09:49:21.949613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.484 [2024-10-30 09:49:21.951156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.484 [2024-10-30 09:49:21.951184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:43.484 BaseBdev2 00:14:43.484 09:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.484 09:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:14:43.484 09:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.484 09:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:43.484 spare_malloc 00:14:43.484 09:49:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.484 09:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:43.484 09:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.484 09:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:43.484 spare_delay 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:43.484 [2024-10-30 09:49:22.004789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:43.484 [2024-10-30 09:49:22.004838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.484 [2024-10-30 09:49:22.004853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:43.484 [2024-10-30 09:49:22.004862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.484 [2024-10-30 09:49:22.006457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.484 [2024-10-30 09:49:22.006490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:43.484 spare 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd 
bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:43.484 [2024-10-30 09:49:22.012830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:43.484 [2024-10-30 09:49:22.014340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:43.484 [2024-10-30 09:49:22.014478] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:43.484 [2024-10-30 09:49:22.014489] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:43.484 [2024-10-30 09:49:22.014556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:43.484 [2024-10-30 09:49:22.014651] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:43.484 [2024-10-30 09:49:22.014658] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:43.484 [2024-10-30 09:49:22.014739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.484 09:49:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.484 "name": "raid_bdev1", 00:14:43.484 "uuid": "9c7702ec-5012-4df4-9135-9248e7862351", 00:14:43.484 "strip_size_kb": 0, 00:14:43.484 "state": "online", 00:14:43.484 "raid_level": "raid1", 00:14:43.484 "superblock": true, 00:14:43.484 "num_base_bdevs": 2, 00:14:43.484 "num_base_bdevs_discovered": 2, 00:14:43.484 "num_base_bdevs_operational": 2, 00:14:43.484 "base_bdevs_list": [ 00:14:43.484 { 00:14:43.484 "name": "BaseBdev1", 00:14:43.484 "uuid": "7bfa21a0-7621-5b03-be57-affd1d5f7ac7", 00:14:43.484 "is_configured": true, 00:14:43.484 "data_offset": 256, 00:14:43.484 
"data_size": 7936 00:14:43.484 }, 00:14:43.484 { 00:14:43.484 "name": "BaseBdev2", 00:14:43.484 "uuid": "ee559721-546d-5268-837f-e32e08a91db2", 00:14:43.484 "is_configured": true, 00:14:43.484 "data_offset": 256, 00:14:43.484 "data_size": 7936 00:14:43.484 } 00:14:43.484 ] 00:14:43.484 }' 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.484 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:43.743 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:43.743 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.743 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:43.743 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:43.743 [2024-10-30 09:49:22.333143] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.743 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.743 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.999 09:49:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:43.999 [2024-10-30 09:49:22.568976] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:43.999 /dev/nbd0 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:43.999 09:49:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:43.999 1+0 records in 00:14:43.999 1+0 records out 00:14:43.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194524 s, 21.1 MB/s 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@891 -- # return 0 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:43.999 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:44.000 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:44.000 09:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:14:44.931 7936+0 records in 00:14:44.931 7936+0 records out 00:14:44.931 32505856 bytes (33 MB, 31 MiB) copied, 0.673525 s, 48.3 MB/s 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:44.931 [2024-10-30 09:49:23.501054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:44.931 [2024-10-30 09:49:23.509148] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.931 "name": "raid_bdev1", 00:14:44.931 "uuid": "9c7702ec-5012-4df4-9135-9248e7862351", 00:14:44.931 "strip_size_kb": 0, 00:14:44.931 "state": "online", 00:14:44.931 "raid_level": "raid1", 00:14:44.931 "superblock": true, 00:14:44.931 "num_base_bdevs": 2, 00:14:44.931 "num_base_bdevs_discovered": 1, 00:14:44.931 "num_base_bdevs_operational": 1, 00:14:44.931 "base_bdevs_list": [ 00:14:44.931 { 00:14:44.931 "name": null, 00:14:44.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.931 "is_configured": false, 00:14:44.931 "data_offset": 0, 00:14:44.931 "data_size": 7936 00:14:44.931 }, 00:14:44.931 { 00:14:44.931 "name": "BaseBdev2", 00:14:44.931 "uuid": "ee559721-546d-5268-837f-e32e08a91db2", 00:14:44.931 "is_configured": 
true, 00:14:44.931 "data_offset": 256, 00:14:44.931 "data_size": 7936 00:14:44.931 } 00:14:44.931 ] 00:14:44.931 }' 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.931 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:45.189 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:45.189 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.189 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:45.189 [2024-10-30 09:49:23.805192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:45.447 [2024-10-30 09:49:23.813075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:14:45.447 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.447 09:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:45.447 [2024-10-30 09:49:23.814579] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:46.380 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.380 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.380 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.380 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.380 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.380 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.380 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.380 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.380 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:46.380 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.380 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.380 "name": "raid_bdev1", 00:14:46.380 "uuid": "9c7702ec-5012-4df4-9135-9248e7862351", 00:14:46.380 "strip_size_kb": 0, 00:14:46.380 "state": "online", 00:14:46.380 "raid_level": "raid1", 00:14:46.380 "superblock": true, 00:14:46.380 "num_base_bdevs": 2, 00:14:46.380 "num_base_bdevs_discovered": 2, 00:14:46.380 "num_base_bdevs_operational": 2, 00:14:46.380 "process": { 00:14:46.380 "type": "rebuild", 00:14:46.380 "target": "spare", 00:14:46.380 "progress": { 00:14:46.380 "blocks": 2560, 00:14:46.380 "percent": 32 00:14:46.380 } 00:14:46.380 }, 00:14:46.380 "base_bdevs_list": [ 00:14:46.380 { 00:14:46.380 "name": "spare", 00:14:46.380 "uuid": "be775202-23ee-5491-a76d-a2dde781d28c", 00:14:46.380 "is_configured": true, 00:14:46.380 "data_offset": 256, 00:14:46.380 "data_size": 7936 00:14:46.380 }, 00:14:46.380 { 00:14:46.380 "name": "BaseBdev2", 00:14:46.380 "uuid": "ee559721-546d-5268-837f-e32e08a91db2", 00:14:46.380 "is_configured": true, 00:14:46.380 "data_offset": 256, 00:14:46.380 "data_size": 7936 00:14:46.380 } 00:14:46.380 ] 00:14:46.380 }' 00:14:46.380 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.380 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:46.381 
09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.381 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.381 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:46.381 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.381 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:46.381 [2024-10-30 09:49:24.917111] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:46.381 [2024-10-30 09:49:24.919313] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:46.381 [2024-10-30 09:49:24.919364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.381 [2024-10-30 09:49:24.919376] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:46.381 [2024-10-30 09:49:24.919384] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:46.381 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.381 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:46.381 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.381 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.381 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.381 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.381 09:49:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:46.381 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.381 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.381 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.381 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.381 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.381 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.381 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.381 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:46.381 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.381 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.381 "name": "raid_bdev1", 00:14:46.381 "uuid": "9c7702ec-5012-4df4-9135-9248e7862351", 00:14:46.381 "strip_size_kb": 0, 00:14:46.381 "state": "online", 00:14:46.381 "raid_level": "raid1", 00:14:46.381 "superblock": true, 00:14:46.381 "num_base_bdevs": 2, 00:14:46.381 "num_base_bdevs_discovered": 1, 00:14:46.381 "num_base_bdevs_operational": 1, 00:14:46.381 "base_bdevs_list": [ 00:14:46.381 { 00:14:46.381 "name": null, 00:14:46.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.381 "is_configured": false, 00:14:46.381 "data_offset": 0, 00:14:46.381 "data_size": 7936 00:14:46.381 }, 00:14:46.381 { 00:14:46.381 "name": "BaseBdev2", 00:14:46.381 "uuid": 
"ee559721-546d-5268-837f-e32e08a91db2", 00:14:46.381 "is_configured": true, 00:14:46.381 "data_offset": 256, 00:14:46.381 "data_size": 7936 00:14:46.381 } 00:14:46.381 ] 00:14:46.381 }' 00:14:46.381 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.381 09:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:46.640 09:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:46.640 09:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.640 09:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:46.640 09:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:46.640 09:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.640 09:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.640 09:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.640 09:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.640 09:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:46.640 09:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.640 09:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.640 "name": "raid_bdev1", 00:14:46.640 "uuid": "9c7702ec-5012-4df4-9135-9248e7862351", 00:14:46.640 "strip_size_kb": 0, 00:14:46.640 "state": "online", 00:14:46.640 "raid_level": "raid1", 00:14:46.640 "superblock": true, 00:14:46.640 
"num_base_bdevs": 2, 00:14:46.640 "num_base_bdevs_discovered": 1, 00:14:46.640 "num_base_bdevs_operational": 1, 00:14:46.640 "base_bdevs_list": [ 00:14:46.640 { 00:14:46.640 "name": null, 00:14:46.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.640 "is_configured": false, 00:14:46.640 "data_offset": 0, 00:14:46.640 "data_size": 7936 00:14:46.640 }, 00:14:46.640 { 00:14:46.640 "name": "BaseBdev2", 00:14:46.640 "uuid": "ee559721-546d-5268-837f-e32e08a91db2", 00:14:46.640 "is_configured": true, 00:14:46.640 "data_offset": 256, 00:14:46.640 "data_size": 7936 00:14:46.640 } 00:14:46.640 ] 00:14:46.640 }' 00:14:46.640 09:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.898 09:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:46.898 09:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.898 09:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:46.898 09:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:46.898 09:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.898 09:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:46.898 [2024-10-30 09:49:25.323579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:46.898 [2024-10-30 09:49:25.331113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:14:46.898 09:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.898 09:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:46.898 [2024-10-30 09:49:25.332629] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.840 "name": "raid_bdev1", 00:14:47.840 "uuid": "9c7702ec-5012-4df4-9135-9248e7862351", 00:14:47.840 "strip_size_kb": 0, 00:14:47.840 "state": "online", 00:14:47.840 "raid_level": "raid1", 00:14:47.840 "superblock": true, 00:14:47.840 "num_base_bdevs": 2, 00:14:47.840 "num_base_bdevs_discovered": 2, 00:14:47.840 "num_base_bdevs_operational": 2, 00:14:47.840 "process": { 00:14:47.840 "type": "rebuild", 00:14:47.840 "target": "spare", 00:14:47.840 "progress": { 00:14:47.840 "blocks": 2560, 00:14:47.840 "percent": 32 00:14:47.840 } 00:14:47.840 
}, 00:14:47.840 "base_bdevs_list": [ 00:14:47.840 { 00:14:47.840 "name": "spare", 00:14:47.840 "uuid": "be775202-23ee-5491-a76d-a2dde781d28c", 00:14:47.840 "is_configured": true, 00:14:47.840 "data_offset": 256, 00:14:47.840 "data_size": 7936 00:14:47.840 }, 00:14:47.840 { 00:14:47.840 "name": "BaseBdev2", 00:14:47.840 "uuid": "ee559721-546d-5268-837f-e32e08a91db2", 00:14:47.840 "is_configured": true, 00:14:47.840 "data_offset": 256, 00:14:47.840 "data_size": 7936 00:14:47.840 } 00:14:47.840 ] 00:14:47.840 }' 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:47.840 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=561 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:47.840 09:49:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.840 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:48.111 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.111 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.111 "name": "raid_bdev1", 00:14:48.111 "uuid": "9c7702ec-5012-4df4-9135-9248e7862351", 00:14:48.111 "strip_size_kb": 0, 00:14:48.111 "state": "online", 00:14:48.111 "raid_level": "raid1", 00:14:48.111 "superblock": true, 00:14:48.111 "num_base_bdevs": 2, 00:14:48.111 "num_base_bdevs_discovered": 2, 00:14:48.111 "num_base_bdevs_operational": 2, 00:14:48.111 "process": { 00:14:48.111 "type": "rebuild", 00:14:48.111 "target": "spare", 00:14:48.111 "progress": { 00:14:48.111 "blocks": 2816, 00:14:48.111 "percent": 35 00:14:48.111 } 00:14:48.111 }, 00:14:48.111 "base_bdevs_list": [ 00:14:48.111 { 00:14:48.111 "name": "spare", 00:14:48.111 "uuid": 
"be775202-23ee-5491-a76d-a2dde781d28c", 00:14:48.111 "is_configured": true, 00:14:48.111 "data_offset": 256, 00:14:48.111 "data_size": 7936 00:14:48.111 }, 00:14:48.111 { 00:14:48.111 "name": "BaseBdev2", 00:14:48.111 "uuid": "ee559721-546d-5268-837f-e32e08a91db2", 00:14:48.111 "is_configured": true, 00:14:48.111 "data_offset": 256, 00:14:48.111 "data_size": 7936 00:14:48.111 } 00:14:48.111 ] 00:14:48.111 }' 00:14:48.111 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.111 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.111 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.111 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.111 09:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:49.044 09:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:49.044 09:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.044 09:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.044 09:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.044 09:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.044 09:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.044 09:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.044 09:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:49.044 09:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:49.044 09:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.044 09:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.044 09:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.044 "name": "raid_bdev1", 00:14:49.044 "uuid": "9c7702ec-5012-4df4-9135-9248e7862351", 00:14:49.044 "strip_size_kb": 0, 00:14:49.044 "state": "online", 00:14:49.044 "raid_level": "raid1", 00:14:49.044 "superblock": true, 00:14:49.044 "num_base_bdevs": 2, 00:14:49.044 "num_base_bdevs_discovered": 2, 00:14:49.044 "num_base_bdevs_operational": 2, 00:14:49.044 "process": { 00:14:49.044 "type": "rebuild", 00:14:49.044 "target": "spare", 00:14:49.044 "progress": { 00:14:49.044 "blocks": 5632, 00:14:49.044 "percent": 70 00:14:49.044 } 00:14:49.044 }, 00:14:49.044 "base_bdevs_list": [ 00:14:49.044 { 00:14:49.044 "name": "spare", 00:14:49.044 "uuid": "be775202-23ee-5491-a76d-a2dde781d28c", 00:14:49.044 "is_configured": true, 00:14:49.044 "data_offset": 256, 00:14:49.044 "data_size": 7936 00:14:49.044 }, 00:14:49.044 { 00:14:49.044 "name": "BaseBdev2", 00:14:49.044 "uuid": "ee559721-546d-5268-837f-e32e08a91db2", 00:14:49.044 "is_configured": true, 00:14:49.044 "data_offset": 256, 00:14:49.044 "data_size": 7936 00:14:49.044 } 00:14:49.044 ] 00:14:49.044 }' 00:14:49.044 09:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.044 09:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.044 09:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.044 09:49:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.044 09:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:49.976 [2024-10-30 09:49:28.445608] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:49.976 [2024-10-30 09:49:28.445677] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:49.976 [2024-10-30 09:49:28.445761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.234 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:50.234 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.234 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.234 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.234 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.234 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.234 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.234 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.234 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.234 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:50.234 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.234 09:49:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.234 "name": "raid_bdev1", 00:14:50.234 "uuid": "9c7702ec-5012-4df4-9135-9248e7862351", 00:14:50.234 "strip_size_kb": 0, 00:14:50.234 "state": "online", 00:14:50.234 "raid_level": "raid1", 00:14:50.234 "superblock": true, 00:14:50.234 "num_base_bdevs": 2, 00:14:50.234 "num_base_bdevs_discovered": 2, 00:14:50.234 "num_base_bdevs_operational": 2, 00:14:50.234 "base_bdevs_list": [ 00:14:50.234 { 00:14:50.234 "name": "spare", 00:14:50.234 "uuid": "be775202-23ee-5491-a76d-a2dde781d28c", 00:14:50.234 "is_configured": true, 00:14:50.234 "data_offset": 256, 00:14:50.234 "data_size": 7936 00:14:50.234 }, 00:14:50.234 { 00:14:50.234 "name": "BaseBdev2", 00:14:50.234 "uuid": "ee559721-546d-5268-837f-e32e08a91db2", 00:14:50.234 "is_configured": true, 00:14:50.234 "data_offset": 256, 00:14:50.234 "data_size": 7936 00:14:50.234 } 00:14:50.234 ] 00:14:50.234 }' 00:14:50.234 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.234 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:50.234 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.234 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:50.234 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:14:50.234 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:50.234 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.234 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:50.234 09:49:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:50.234 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.235 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.235 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.235 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.235 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:50.235 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.235 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.235 "name": "raid_bdev1", 00:14:50.235 "uuid": "9c7702ec-5012-4df4-9135-9248e7862351", 00:14:50.235 "strip_size_kb": 0, 00:14:50.235 "state": "online", 00:14:50.235 "raid_level": "raid1", 00:14:50.235 "superblock": true, 00:14:50.235 "num_base_bdevs": 2, 00:14:50.235 "num_base_bdevs_discovered": 2, 00:14:50.235 "num_base_bdevs_operational": 2, 00:14:50.235 "base_bdevs_list": [ 00:14:50.235 { 00:14:50.235 "name": "spare", 00:14:50.235 "uuid": "be775202-23ee-5491-a76d-a2dde781d28c", 00:14:50.235 "is_configured": true, 00:14:50.235 "data_offset": 256, 00:14:50.235 "data_size": 7936 00:14:50.235 }, 00:14:50.235 { 00:14:50.235 "name": "BaseBdev2", 00:14:50.235 "uuid": "ee559721-546d-5268-837f-e32e08a91db2", 00:14:50.235 "is_configured": true, 00:14:50.235 "data_offset": 256, 00:14:50.235 "data_size": 7936 00:14:50.235 } 00:14:50.235 ] 00:14:50.235 }' 00:14:50.235 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.235 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:50.235 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.235 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:50.235 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:50.235 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.235 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.235 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:50.235 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:50.235 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:50.235 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.235 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.235 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.235 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.235 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.235 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.235 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:50.235 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:50.235 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.494 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.494 "name": "raid_bdev1", 00:14:50.494 "uuid": "9c7702ec-5012-4df4-9135-9248e7862351", 00:14:50.494 "strip_size_kb": 0, 00:14:50.494 "state": "online", 00:14:50.494 "raid_level": "raid1", 00:14:50.494 "superblock": true, 00:14:50.494 "num_base_bdevs": 2, 00:14:50.494 "num_base_bdevs_discovered": 2, 00:14:50.494 "num_base_bdevs_operational": 2, 00:14:50.494 "base_bdevs_list": [ 00:14:50.494 { 00:14:50.494 "name": "spare", 00:14:50.494 "uuid": "be775202-23ee-5491-a76d-a2dde781d28c", 00:14:50.494 "is_configured": true, 00:14:50.494 "data_offset": 256, 00:14:50.494 "data_size": 7936 00:14:50.494 }, 00:14:50.494 { 00:14:50.494 "name": "BaseBdev2", 00:14:50.494 "uuid": "ee559721-546d-5268-837f-e32e08a91db2", 00:14:50.494 "is_configured": true, 00:14:50.494 "data_offset": 256, 00:14:50.494 "data_size": 7936 00:14:50.494 } 00:14:50.494 ] 00:14:50.494 }' 00:14:50.494 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.494 09:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:50.753 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:50.753 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.753 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:50.753 [2024-10-30 09:49:29.145970] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:50.753 [2024-10-30 09:49:29.145999] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:50.753 [2024-10-30 09:49:29.146072] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.753 [2024-10-30 09:49:29.146130] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:50.753 [2024-10-30 09:49:29.146138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:50.753 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.753 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.753 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:14:50.753 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.753 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:50.753 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.753 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:50.753 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:50.753 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:50.753 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:50.753 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:50.753 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:50.753 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:50.753 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:50.753 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:50.753 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:14:50.753 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:50.753 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:50.753 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:51.011 /dev/nbd0 00:14:51.011 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:51.011 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:51.011 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:51.011 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:14:51.011 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:51.011 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:51.011 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:51.011 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:14:51.011 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:51.011 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:51.011 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:51.011 1+0 records in 00:14:51.011 1+0 records out 00:14:51.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190866 s, 21.5 MB/s 00:14:51.011 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.011 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:14:51.011 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.011 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:51.011 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:14:51.011 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:51.011 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:51.011 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:51.011 /dev/nbd1 00:14:51.269 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:51.269 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:51.269 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:51.269 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:14:51.269 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:51.269 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 
00:14:51.269 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:51.269 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:14:51.269 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:51.269 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:51.269 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:51.269 1+0 records in 00:14:51.269 1+0 records out 00:14:51.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281001 s, 14.6 MB/s 00:14:51.269 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.269 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:14:51.269 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.269 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:51.269 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:14:51.269 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:51.269 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:51.269 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:51.269 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:51.269 09:49:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:51.269 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:51.269 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:51.269 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:14:51.269 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:51.269 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:51.527 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:51.527 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:51.527 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:51.527 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:51.527 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:51.527 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:51.527 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:14:51.527 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:14:51.527 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:51.527 09:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:51.784 09:49:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:51.784 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:51.784 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:51.784 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:51.784 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:51.785 [2024-10-30 09:49:30.205486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:51.785 [2024-10-30 09:49:30.205543] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.785 [2024-10-30 09:49:30.205561] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:51.785 [2024-10-30 09:49:30.205569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.785 [2024-10-30 09:49:30.207223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.785 [2024-10-30 09:49:30.207255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:51.785 [2024-10-30 09:49:30.207305] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:51.785 [2024-10-30 09:49:30.207347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:51.785 [2024-10-30 09:49:30.207446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:51.785 spare 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:51.785 [2024-10-30 09:49:30.307509] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:51.785 [2024-10-30 09:49:30.307537] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:51.785 [2024-10-30 09:49:30.307623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:14:51.785 [2024-10-30 09:49:30.307738] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:51.785 [2024-10-30 09:49:30.307745] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:51.785 [2024-10-30 09:49:30.307840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.785 09:49:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.785 "name": "raid_bdev1", 00:14:51.785 "uuid": "9c7702ec-5012-4df4-9135-9248e7862351", 00:14:51.785 "strip_size_kb": 0, 00:14:51.785 "state": "online", 00:14:51.785 "raid_level": "raid1", 00:14:51.785 "superblock": true, 00:14:51.785 "num_base_bdevs": 2, 00:14:51.785 "num_base_bdevs_discovered": 2, 00:14:51.785 "num_base_bdevs_operational": 2, 00:14:51.785 "base_bdevs_list": [ 00:14:51.785 { 00:14:51.785 "name": "spare", 00:14:51.785 "uuid": "be775202-23ee-5491-a76d-a2dde781d28c", 00:14:51.785 "is_configured": true, 00:14:51.785 "data_offset": 256, 00:14:51.785 "data_size": 7936 00:14:51.785 }, 00:14:51.785 { 00:14:51.785 "name": "BaseBdev2", 00:14:51.785 "uuid": "ee559721-546d-5268-837f-e32e08a91db2", 00:14:51.785 "is_configured": true, 00:14:51.785 "data_offset": 256, 00:14:51.785 "data_size": 7936 00:14:51.785 } 00:14:51.785 ] 00:14:51.785 }' 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.785 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:52.043 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:52.043 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.043 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:52.043 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:52.043 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:14:52.043 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.043 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.043 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:52.043 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.043 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.043 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.043 "name": "raid_bdev1", 00:14:52.043 "uuid": "9c7702ec-5012-4df4-9135-9248e7862351", 00:14:52.043 "strip_size_kb": 0, 00:14:52.043 "state": "online", 00:14:52.043 "raid_level": "raid1", 00:14:52.043 "superblock": true, 00:14:52.043 "num_base_bdevs": 2, 00:14:52.043 "num_base_bdevs_discovered": 2, 00:14:52.043 "num_base_bdevs_operational": 2, 00:14:52.043 "base_bdevs_list": [ 00:14:52.043 { 00:14:52.043 "name": "spare", 00:14:52.043 "uuid": "be775202-23ee-5491-a76d-a2dde781d28c", 00:14:52.043 "is_configured": true, 00:14:52.043 "data_offset": 256, 00:14:52.043 "data_size": 7936 00:14:52.043 }, 00:14:52.043 { 00:14:52.043 "name": "BaseBdev2", 00:14:52.043 "uuid": "ee559721-546d-5268-837f-e32e08a91db2", 00:14:52.043 "is_configured": true, 00:14:52.043 "data_offset": 256, 00:14:52.043 "data_size": 7936 00:14:52.043 } 00:14:52.043 ] 00:14:52.043 }' 00:14:52.043 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.301 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:52.301 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.301 
09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:52.301 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:52.301 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:52.301 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:52.302 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:14:52.302 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:52.302 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:14:52.302 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:14:52.302 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:52.302 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:52.302 [2024-10-30 09:49:30.757605] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:52.302 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:52.302 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:14:52.302 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:52.302 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:52.302 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:52.302 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:52.302 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:14:52.302 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:52.302 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:52.302 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:52.302 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:52.302 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:52.302 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:52.302 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:52.302 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:52.302 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:52.302 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:52.302 "name": "raid_bdev1",
00:14:52.302 "uuid": "9c7702ec-5012-4df4-9135-9248e7862351",
00:14:52.302 "strip_size_kb": 0,
00:14:52.302 "state": "online",
00:14:52.302 "raid_level": "raid1",
00:14:52.302 "superblock": true,
00:14:52.302 "num_base_bdevs": 2,
00:14:52.302 "num_base_bdevs_discovered": 1,
00:14:52.302 "num_base_bdevs_operational": 1,
00:14:52.302 "base_bdevs_list": [
00:14:52.302 {
00:14:52.302 "name": null,
00:14:52.302 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:52.302 "is_configured": false,
00:14:52.302 "data_offset": 0,
00:14:52.302 "data_size": 7936
00:14:52.302 },
00:14:52.302 {
00:14:52.302 "name": "BaseBdev2",
00:14:52.302 "uuid": "ee559721-546d-5268-837f-e32e08a91db2",
00:14:52.302 "is_configured": true,
00:14:52.302 "data_offset": 256,
00:14:52.302 "data_size": 7936
00:14:52.302 }
00:14:52.302 ]
00:14:52.302 }'
00:14:52.302 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:52.302 09:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:52.560 09:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:52.560 09:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:52.560 09:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:52.560 [2024-10-30 09:49:31.085691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:52.560 [2024-10-30 09:49:31.085839] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:14:52.560 [2024-10-30 09:49:31.085852] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:14:52.560 [2024-10-30 09:49:31.085880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:52.560 [2024-10-30 09:49:31.093302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20
00:14:52.560 09:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:52.560 09:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1
00:14:52.560 [2024-10-30 09:49:31.094838] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:53.495 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:53.495 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:53.495 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:53.495 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:53.495 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:53.495 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:53.495 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:53.495 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:53.495 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:53.754 "name": "raid_bdev1",
00:14:53.754 "uuid": "9c7702ec-5012-4df4-9135-9248e7862351",
00:14:53.754 "strip_size_kb": 0,
00:14:53.754 "state": "online",
00:14:53.754 "raid_level": "raid1",
00:14:53.754 "superblock": true,
00:14:53.754 "num_base_bdevs": 2,
00:14:53.754 "num_base_bdevs_discovered": 2,
00:14:53.754 "num_base_bdevs_operational": 2,
00:14:53.754 "process": {
00:14:53.754 "type": "rebuild",
00:14:53.754 "target": "spare",
00:14:53.754 "progress": {
00:14:53.754 "blocks": 2560,
00:14:53.754 "percent": 32
00:14:53.754 }
00:14:53.754 },
00:14:53.754 "base_bdevs_list": [
00:14:53.754 {
00:14:53.754 "name": "spare",
00:14:53.754 "uuid": "be775202-23ee-5491-a76d-a2dde781d28c",
00:14:53.754 "is_configured": true,
00:14:53.754 "data_offset": 256,
00:14:53.754 "data_size": 7936
00:14:53.754 },
00:14:53.754 {
00:14:53.754 "name": "BaseBdev2",
00:14:53.754 "uuid": "ee559721-546d-5268-837f-e32e08a91db2",
00:14:53.754 "is_configured": true,
00:14:53.754 "data_offset": 256,
00:14:53.754 "data_size": 7936
00:14:53.754 }
00:14:53.754 ]
00:14:53.754 }'
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:53.754 [2024-10-30 09:49:32.197373] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:53.754 [2024-10-30 09:49:32.199528] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:14:53.754 [2024-10-30 09:49:32.199581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:53.754 [2024-10-30 09:49:32.199593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:53.754 [2024-10-30 09:49:32.199601] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:53.754 "name": "raid_bdev1",
00:14:53.754 "uuid": "9c7702ec-5012-4df4-9135-9248e7862351",
00:14:53.754 "strip_size_kb": 0,
00:14:53.754 "state": "online",
00:14:53.754 "raid_level": "raid1",
00:14:53.754 "superblock": true,
00:14:53.754 "num_base_bdevs": 2,
00:14:53.754 "num_base_bdevs_discovered": 1,
00:14:53.754 "num_base_bdevs_operational": 1,
00:14:53.754 "base_bdevs_list": [
00:14:53.754 {
00:14:53.754 "name": null,
00:14:53.754 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:53.754 "is_configured": false,
00:14:53.754 "data_offset": 0,
00:14:53.754 "data_size": 7936
00:14:53.754 },
00:14:53.754 {
00:14:53.754 "name": "BaseBdev2",
00:14:53.754 "uuid": "ee559721-546d-5268-837f-e32e08a91db2",
00:14:53.754 "is_configured": true,
00:14:53.754 "data_offset": 256,
00:14:53.754 "data_size": 7936
00:14:53.754 }
00:14:53.754 ]
00:14:53.754 }'
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:53.754 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:54.013 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:14:54.013 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:54.013 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:54.013 [2024-10-30 09:49:32.515641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:14:54.013 [2024-10-30 09:49:32.515693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:54.013 [2024-10-30 09:49:32.515713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:14:54.013 [2024-10-30 09:49:32.515724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:54.013 [2024-10-30 09:49:32.515907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:54.013 [2024-10-30 09:49:32.515919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:14:54.013 [2024-10-30 09:49:32.515964] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:14:54.013 [2024-10-30 09:49:32.515975] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:14:54.013 [2024-10-30 09:49:32.515983] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:14:54.013 [2024-10-30 09:49:32.515999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:54.013 [2024-10-30 09:49:32.523250] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 spare
00:14:54.013 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:54.013 09:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1
00:14:54.013 [2024-10-30 09:49:32.524771] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:54.945 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:54.945 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:54.945 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:54.945 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:54.945 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:54.945 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:54.945 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:54.945 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:54.945 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:54.945 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:54.945 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:54.945 "name": "raid_bdev1",
00:14:54.945 "uuid": "9c7702ec-5012-4df4-9135-9248e7862351",
00:14:54.945 "strip_size_kb": 0,
00:14:54.945 "state": "online",
00:14:54.945 "raid_level": "raid1",
00:14:54.945 "superblock": true,
00:14:54.945 "num_base_bdevs": 2,
00:14:54.945 "num_base_bdevs_discovered": 2,
00:14:54.945 "num_base_bdevs_operational": 2,
00:14:54.945 "process": {
00:14:54.945 "type": "rebuild",
00:14:54.945 "target": "spare",
00:14:54.945 "progress": {
00:14:54.945 "blocks": 2560,
00:14:54.945 "percent": 32
00:14:54.945 }
00:14:54.945 },
00:14:54.945 "base_bdevs_list": [
00:14:54.945 {
00:14:54.945 "name": "spare",
00:14:54.945 "uuid": "be775202-23ee-5491-a76d-a2dde781d28c",
00:14:54.945 "is_configured": true,
00:14:54.945 "data_offset": 256,
00:14:54.945 "data_size": 7936
00:14:54.945 },
00:14:54.945 {
00:14:54.945 "name": "BaseBdev2",
00:14:54.945 "uuid": "ee559721-546d-5268-837f-e32e08a91db2",
00:14:54.945 "is_configured": true,
00:14:54.945 "data_offset": 256,
00:14:54.945 "data_size": 7936
00:14:54.945 }
00:14:54.945 ]
00:14:54.945 }'
00:14:55.202 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:55.202 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:55.202 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:55.202 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:55.202 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:14:55.202 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:55.202 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:55.202 [2024-10-30 09:49:33.623341] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:55.202 [2024-10-30 09:49:33.629642] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:14:55.202 [2024-10-30 09:49:33.629794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:55.202 [2024-10-30 09:49:33.629813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:55.202 [2024-10-30 09:49:33.629820] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:14:55.202 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:55.202 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:14:55.202 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:55.202 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:55.202 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:55.202 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:55.202 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:14:55.202 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:55.202 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:55.202 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:55.202 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:55.202 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:55.202 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:55.202 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:55.202 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:55.202 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:55.202 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:55.202 "name": "raid_bdev1",
00:14:55.202 "uuid": "9c7702ec-5012-4df4-9135-9248e7862351",
00:14:55.202 "strip_size_kb": 0,
00:14:55.202 "state": "online",
00:14:55.202 "raid_level": "raid1",
00:14:55.202 "superblock": true,
00:14:55.202 "num_base_bdevs": 2,
00:14:55.202 "num_base_bdevs_discovered": 1,
00:14:55.202 "num_base_bdevs_operational": 1,
00:14:55.203 "base_bdevs_list": [
00:14:55.203 {
00:14:55.203 "name": null,
00:14:55.203 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:55.203 "is_configured": false,
00:14:55.203 "data_offset": 0,
00:14:55.203 "data_size": 7936
00:14:55.203 },
00:14:55.203 {
00:14:55.203 "name": "BaseBdev2",
00:14:55.203 "uuid": "ee559721-546d-5268-837f-e32e08a91db2",
00:14:55.203 "is_configured": true,
00:14:55.203 "data_offset": 256,
00:14:55.203 "data_size": 7936
00:14:55.203 }
00:14:55.203 ]
00:14:55.203 }'
00:14:55.203 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:55.203 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:55.464 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:55.464 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:55.464 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:55.464 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:55.464 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:55.464 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:55.464 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:55.464 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:55.464 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:55.464 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:55.464 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:55.464 "name": "raid_bdev1",
00:14:55.464 "uuid": "9c7702ec-5012-4df4-9135-9248e7862351",
00:14:55.464 "strip_size_kb": 0,
00:14:55.464 "state": "online",
00:14:55.464 "raid_level": "raid1",
00:14:55.464 "superblock": true,
00:14:55.464 "num_base_bdevs": 2,
00:14:55.464 "num_base_bdevs_discovered": 1,
00:14:55.464 "num_base_bdevs_operational": 1,
00:14:55.464 "base_bdevs_list": [
00:14:55.464 {
00:14:55.464 "name": null,
00:14:55.464 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:55.464 "is_configured": false,
00:14:55.464 "data_offset": 0,
00:14:55.464 "data_size": 7936
00:14:55.464 },
00:14:55.464 {
00:14:55.464 "name": "BaseBdev2",
00:14:55.464 "uuid": "ee559721-546d-5268-837f-e32e08a91db2",
00:14:55.464 "is_configured": true,
00:14:55.464 "data_offset": 256,
00:14:55.464 "data_size": 7936
00:14:55.464 }
00:14:55.464 ]
00:14:55.464 }'
00:14:55.465 09:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:55.465 09:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:55.465 09:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:55.465 09:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:55.465 09:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:14:55.465 09:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:55.465 09:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:55.465 09:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:55.465 09:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:14:55.465 09:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:55.465 09:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:55.465 [2024-10-30 09:49:34.069972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:14:55.465 [2024-10-30 09:49:34.070067] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:55.465 [2024-10-30 09:49:34.070097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:14:55.465 [2024-10-30 09:49:34.070107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:55.465 [2024-10-30 09:49:34.070369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:55.465 [2024-10-30 09:49:34.070383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:14:55.465 [2024-10-30 09:49:34.070442] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:14:55.465 [2024-10-30 09:49:34.070460] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:14:55.465 [2024-10-30 09:49:34.070471] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:14:55.465 [2024-10-30 09:49:34.070483] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:14:55.465 BaseBdev1
00:14:55.465 09:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:55.465 09:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1
00:14:56.892 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:14:56.892 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:56.892 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:56.892 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:56.892 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:56.892 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:14:56.892 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:56.892 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:56.892 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:56.892 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:56.892 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:56.892 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:56.892 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:56.892 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:56.892 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:56.892 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:56.892 "name": "raid_bdev1",
00:14:56.892 "uuid": "9c7702ec-5012-4df4-9135-9248e7862351",
00:14:56.892 "strip_size_kb": 0,
00:14:56.892 "state": "online",
00:14:56.892 "raid_level": "raid1",
00:14:56.892 "superblock": true,
00:14:56.892 "num_base_bdevs": 2,
00:14:56.892 "num_base_bdevs_discovered": 1,
00:14:56.892 "num_base_bdevs_operational": 1,
00:14:56.892 "base_bdevs_list": [
00:14:56.892 {
00:14:56.892 "name": null,
00:14:56.892 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:56.892 "is_configured": false,
00:14:56.892 "data_offset": 0,
00:14:56.892 "data_size": 7936
00:14:56.892 },
00:14:56.892 {
00:14:56.892 "name": "BaseBdev2",
00:14:56.892 "uuid": "ee559721-546d-5268-837f-e32e08a91db2",
00:14:56.892 "is_configured": true,
00:14:56.892 "data_offset": 256,
00:14:56.892 "data_size": 7936
00:14:56.892 }
00:14:56.892 ]
00:14:56.892 }'
00:14:56.892 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:56.892 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:56.892 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:56.892 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:56.892 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:56.893 "name": "raid_bdev1",
00:14:56.893 "uuid": "9c7702ec-5012-4df4-9135-9248e7862351",
00:14:56.893 "strip_size_kb": 0,
00:14:56.893 "state": "online",
00:14:56.893 "raid_level": "raid1",
00:14:56.893 "superblock": true,
00:14:56.893 "num_base_bdevs": 2,
00:14:56.893 "num_base_bdevs_discovered": 1,
00:14:56.893 "num_base_bdevs_operational": 1,
00:14:56.893 "base_bdevs_list": [
00:14:56.893 {
00:14:56.893 "name": null,
00:14:56.893 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:56.893 "is_configured": false,
00:14:56.893 "data_offset": 0,
00:14:56.893 "data_size": 7936
00:14:56.893 },
00:14:56.893 {
00:14:56.893 "name": "BaseBdev2",
00:14:56.893 "uuid": "ee559721-546d-5268-837f-e32e08a91db2",
00:14:56.893 "is_configured": true,
00:14:56.893 "data_offset": 256,
00:14:56.893 "data_size": 7936
00:14:56.893 }
00:14:56.893 ]
00:14:56.893 }'
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:56.893 [2024-10-30 09:49:35.498334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:56.893 [2024-10-30 09:49:35.498489] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:14:56.893 [2024-10-30 09:49:35.498502] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:14:56.893 request:
00:14:56.893 {
00:14:56.893 "base_bdev": "BaseBdev1",
00:14:56.893 "raid_bdev": "raid_bdev1",
00:14:56.893 "method": "bdev_raid_add_base_bdev",
00:14:56.893 "req_id": 1
00:14:56.893 }
00:14:56.893 Got JSON-RPC error response
00:14:56.893 response:
00:14:56.893 {
00:14:56.893 "code": -22,
00:14:56.893 "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:14:56.893 }
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:14:56.893 09:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:58.266 "name": "raid_bdev1",
00:14:58.266 "uuid": "9c7702ec-5012-4df4-9135-9248e7862351",
00:14:58.266 "strip_size_kb": 0,
00:14:58.266 "state": "online",
00:14:58.266 "raid_level": "raid1",
00:14:58.266 "superblock": true,
00:14:58.266 "num_base_bdevs": 2,
00:14:58.266 "num_base_bdevs_discovered": 1,
00:14:58.266 "num_base_bdevs_operational": 1,
00:14:58.266 "base_bdevs_list": [
00:14:58.266 {
00:14:58.266 "name": null,
00:14:58.266 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:58.266 "is_configured": false,
00:14:58.266 "data_offset": 0,
00:14:58.266 "data_size": 7936
00:14:58.266 },
00:14:58.266 {
00:14:58.266 "name": "BaseBdev2",
00:14:58.266 "uuid": "ee559721-546d-5268-837f-e32e08a91db2",
00:14:58.266 "is_configured": true,
00:14:58.266 "data_offset": 256,
00:14:58.266 "data_size": 7936
00:14:58.266 }
00:14:58.266 ]
00:14:58.266 }'
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:58.266 "name": "raid_bdev1",
00:14:58.266 "uuid": "9c7702ec-5012-4df4-9135-9248e7862351",
00:14:58.266
"strip_size_kb": 0, 00:14:58.266 "state": "online", 00:14:58.266 "raid_level": "raid1", 00:14:58.266 "superblock": true, 00:14:58.266 "num_base_bdevs": 2, 00:14:58.266 "num_base_bdevs_discovered": 1, 00:14:58.266 "num_base_bdevs_operational": 1, 00:14:58.266 "base_bdevs_list": [ 00:14:58.266 { 00:14:58.266 "name": null, 00:14:58.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.266 "is_configured": false, 00:14:58.266 "data_offset": 0, 00:14:58.266 "data_size": 7936 00:14:58.266 }, 00:14:58.266 { 00:14:58.266 "name": "BaseBdev2", 00:14:58.266 "uuid": "ee559721-546d-5268-837f-e32e08a91db2", 00:14:58.266 "is_configured": true, 00:14:58.266 "data_offset": 256, 00:14:58.266 "data_size": 7936 00:14:58.266 } 00:14:58.266 ] 00:14:58.266 }' 00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:58.266 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.524 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:58.524 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 85259 00:14:58.524 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 85259 ']' 00:14:58.524 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 85259 00:14:58.524 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:14:58.524 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:58.524 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85259 00:14:58.524 killing process with 
pid 85259 00:14:58.524 Received shutdown signal, test time was about 60.000000 seconds 00:14:58.524 00:14:58.524 Latency(us) 00:14:58.524 [2024-10-30T09:49:37.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.524 [2024-10-30T09:49:37.144Z] =================================================================================================================== 00:14:58.524 [2024-10-30T09:49:37.144Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:58.524 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:58.524 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:58.524 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85259' 00:14:58.524 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 85259 00:14:58.524 [2024-10-30 09:49:36.932609] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:58.524 09:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 85259 00:14:58.524 [2024-10-30 09:49:36.932737] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.524 [2024-10-30 09:49:36.932783] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.524 [2024-10-30 09:49:36.932793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:58.524 [2024-10-30 09:49:37.094602] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:59.091 09:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:14:59.091 00:14:59.091 real 0m16.687s 00:14:59.091 user 0m21.260s 00:14:59.091 sys 0m1.842s 00:14:59.091 ************************************ 
00:14:59.091 END TEST raid_rebuild_test_sb_md_separate 00:14:59.091 ************************************ 00:14:59.091 09:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:59.091 09:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:59.091 09:49:37 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:14:59.091 09:49:37 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:14:59.091 09:49:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:59.091 09:49:37 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:59.091 09:49:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:59.091 ************************************ 00:14:59.091 START TEST raid_state_function_test_sb_md_interleaved 00:14:59.091 ************************************ 00:14:59.091 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:14:59.091 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:59.091 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:14:59.091 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:59.091 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:59.091 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:59.091 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.091 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 
-- # echo BaseBdev1 00:14:59.091 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:59.091 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.091 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:59.092 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:59.092 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.092 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:59.092 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:59.092 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:59.092 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:59.092 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:59.092 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:59.350 Process raid pid: 85929 00:14:59.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:59.350 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:59.350 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:59.350 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:59.350 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:59.350 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=85929 00:14:59.350 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85929' 00:14:59.350 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 85929 00:14:59.350 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 85929 ']' 00:14:59.350 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.350 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:59.350 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:59.350 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:59.350 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:59.350 09:49:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:59.350 [2024-10-30 09:49:37.768581] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:14:59.350 [2024-10-30 09:49:37.769006] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.350 [2024-10-30 09:49:37.923520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.608 [2024-10-30 09:49:38.018792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.608 [2024-10-30 09:49:38.138966] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.608 [2024-10-30 09:49:38.138997] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.177 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:00.177 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:15:00.177 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:00.177 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.177 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:00.177 [2024-10-30 
09:49:38.612233] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:00.177 [2024-10-30 09:49:38.612284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:00.177 [2024-10-30 09:49:38.612292] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.177 [2024-10-30 09:49:38.612300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:00.177 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.177 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:00.177 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.177 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.177 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.177 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.177 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:00.177 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.177 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.177 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.177 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.177 09:49:38 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.177 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.177 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.177 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:00.177 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.177 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.177 "name": "Existed_Raid", 00:15:00.177 "uuid": "8797d914-ff4b-4068-aa8d-f00462e258bf", 00:15:00.177 "strip_size_kb": 0, 00:15:00.177 "state": "configuring", 00:15:00.177 "raid_level": "raid1", 00:15:00.177 "superblock": true, 00:15:00.177 "num_base_bdevs": 2, 00:15:00.177 "num_base_bdevs_discovered": 0, 00:15:00.177 "num_base_bdevs_operational": 2, 00:15:00.177 "base_bdevs_list": [ 00:15:00.177 { 00:15:00.177 "name": "BaseBdev1", 00:15:00.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.177 "is_configured": false, 00:15:00.177 "data_offset": 0, 00:15:00.177 "data_size": 0 00:15:00.177 }, 00:15:00.177 { 00:15:00.177 "name": "BaseBdev2", 00:15:00.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.177 "is_configured": false, 00:15:00.177 "data_offset": 0, 00:15:00.177 "data_size": 0 00:15:00.177 } 00:15:00.177 ] 00:15:00.177 }' 00:15:00.177 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.177 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:00.436 [2024-10-30 09:49:38.924231] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:00.436 [2024-10-30 09:49:38.924257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:00.436 [2024-10-30 09:49:38.932244] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:00.436 [2024-10-30 09:49:38.932277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:00.436 [2024-10-30 09:49:38.932284] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.436 [2024-10-30 09:49:38.932293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:15:00.436 09:49:38 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:00.436 [2024-10-30 09:49:38.961689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.436 BaseBdev1 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:00.436 09:49:38 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:00.436 [ 00:15:00.436 { 00:15:00.436 "name": "BaseBdev1", 00:15:00.436 "aliases": [ 00:15:00.436 "a447ac12-1ce6-4604-b7d7-3cd160123037" 00:15:00.436 ], 00:15:00.436 "product_name": "Malloc disk", 00:15:00.436 "block_size": 4128, 00:15:00.436 "num_blocks": 8192, 00:15:00.436 "uuid": "a447ac12-1ce6-4604-b7d7-3cd160123037", 00:15:00.436 "md_size": 32, 00:15:00.436 "md_interleave": true, 00:15:00.436 "dif_type": 0, 00:15:00.436 "assigned_rate_limits": { 00:15:00.436 "rw_ios_per_sec": 0, 00:15:00.436 "rw_mbytes_per_sec": 0, 00:15:00.436 "r_mbytes_per_sec": 0, 00:15:00.436 "w_mbytes_per_sec": 0 00:15:00.436 }, 00:15:00.436 "claimed": true, 00:15:00.436 "claim_type": "exclusive_write", 00:15:00.436 "zoned": false, 00:15:00.436 "supported_io_types": { 00:15:00.436 "read": true, 00:15:00.436 "write": true, 00:15:00.436 "unmap": true, 00:15:00.436 "flush": true, 00:15:00.436 "reset": true, 00:15:00.436 "nvme_admin": false, 00:15:00.436 "nvme_io": false, 00:15:00.436 "nvme_io_md": false, 00:15:00.436 "write_zeroes": true, 00:15:00.436 "zcopy": true, 00:15:00.436 "get_zone_info": false, 00:15:00.436 "zone_management": false, 00:15:00.436 "zone_append": false, 00:15:00.436 "compare": false, 00:15:00.436 "compare_and_write": false, 00:15:00.436 "abort": true, 00:15:00.436 "seek_hole": false, 00:15:00.436 "seek_data": false, 00:15:00.436 "copy": true, 00:15:00.436 "nvme_iov_md": false 00:15:00.436 }, 00:15:00.436 "memory_domains": [ 00:15:00.436 { 00:15:00.436 "dma_device_id": "system", 00:15:00.436 "dma_device_type": 1 00:15:00.436 }, 00:15:00.436 { 00:15:00.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.436 "dma_device_type": 2 00:15:00.436 } 00:15:00.436 ], 00:15:00.436 "driver_specific": {} 00:15:00.436 } 00:15:00.436 ] 00:15:00.436 09:49:38 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.436 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.437 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:00.437 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.437 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.437 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.437 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.437 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.437 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.437 09:49:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.437 09:49:38 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:00.437 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.437 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.437 "name": "Existed_Raid", 00:15:00.437 "uuid": "58537248-6b9c-4269-9a7a-71c0b3b02d39", 00:15:00.437 "strip_size_kb": 0, 00:15:00.437 "state": "configuring", 00:15:00.437 "raid_level": "raid1", 00:15:00.437 "superblock": true, 00:15:00.437 "num_base_bdevs": 2, 00:15:00.437 "num_base_bdevs_discovered": 1, 00:15:00.437 "num_base_bdevs_operational": 2, 00:15:00.437 "base_bdevs_list": [ 00:15:00.437 { 00:15:00.437 "name": "BaseBdev1", 00:15:00.437 "uuid": "a447ac12-1ce6-4604-b7d7-3cd160123037", 00:15:00.437 "is_configured": true, 00:15:00.437 "data_offset": 256, 00:15:00.437 "data_size": 7936 00:15:00.437 }, 00:15:00.437 { 00:15:00.437 "name": "BaseBdev2", 00:15:00.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.437 "is_configured": false, 00:15:00.437 "data_offset": 0, 00:15:00.437 "data_size": 0 00:15:00.437 } 00:15:00.437 ] 00:15:00.437 }' 00:15:00.437 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.437 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:00.695 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:00.695 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.695 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:00.695 [2024-10-30 09:49:39.293784] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:00.695 [2024-10-30 
09:49:39.293823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:00.695 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.695 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:00.695 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.695 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:00.695 [2024-10-30 09:49:39.301848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.695 [2024-10-30 09:49:39.303553] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.695 [2024-10-30 09:49:39.303666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:00.695 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.695 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:00.695 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:00.695 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:00.695 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.695 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.695 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- 
# local raid_level=raid1 00:15:00.695 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.695 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:00.695 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.695 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.695 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.695 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.952 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.952 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.952 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.952 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:00.952 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.952 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.952 "name": "Existed_Raid", 00:15:00.952 "uuid": "26a3553a-b1d2-41b4-8eeb-6af3c3959517", 00:15:00.952 "strip_size_kb": 0, 00:15:00.952 "state": "configuring", 00:15:00.952 "raid_level": "raid1", 00:15:00.952 "superblock": true, 00:15:00.952 "num_base_bdevs": 2, 00:15:00.952 "num_base_bdevs_discovered": 1, 00:15:00.952 "num_base_bdevs_operational": 2, 00:15:00.952 "base_bdevs_list": [ 00:15:00.952 { 
00:15:00.952 "name": "BaseBdev1", 00:15:00.952 "uuid": "a447ac12-1ce6-4604-b7d7-3cd160123037", 00:15:00.952 "is_configured": true, 00:15:00.952 "data_offset": 256, 00:15:00.952 "data_size": 7936 00:15:00.952 }, 00:15:00.952 { 00:15:00.952 "name": "BaseBdev2", 00:15:00.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.952 "is_configured": false, 00:15:00.952 "data_offset": 0, 00:15:00.952 "data_size": 0 00:15:00.952 } 00:15:00.952 ] 00:15:00.952 }' 00:15:00.952 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.952 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.210 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:15:01.210 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.210 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.210 [2024-10-30 09:49:39.656238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:01.210 [2024-10-30 09:49:39.656428] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:01.210 [2024-10-30 09:49:39.656441] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:01.210 [2024-10-30 09:49:39.656525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:01.210 BaseBdev2 00:15:01.210 [2024-10-30 09:49:39.656591] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:01.210 [2024-10-30 09:49:39.656602] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:01.210 [2024-10-30 09:49:39.656659] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.210 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.210 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:01.210 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:01.210 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:01.210 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:15:01.210 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:01.210 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:01.210 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:01.210 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.210 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.210 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.210 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:01.210 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.210 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.210 [ 00:15:01.210 { 00:15:01.210 "name": "BaseBdev2", 00:15:01.210 "aliases": [ 00:15:01.210 
"ca7196d6-cde7-4c87-ac90-6a76ead36b58" 00:15:01.210 ], 00:15:01.210 "product_name": "Malloc disk", 00:15:01.210 "block_size": 4128, 00:15:01.210 "num_blocks": 8192, 00:15:01.210 "uuid": "ca7196d6-cde7-4c87-ac90-6a76ead36b58", 00:15:01.210 "md_size": 32, 00:15:01.210 "md_interleave": true, 00:15:01.210 "dif_type": 0, 00:15:01.210 "assigned_rate_limits": { 00:15:01.210 "rw_ios_per_sec": 0, 00:15:01.210 "rw_mbytes_per_sec": 0, 00:15:01.210 "r_mbytes_per_sec": 0, 00:15:01.210 "w_mbytes_per_sec": 0 00:15:01.210 }, 00:15:01.210 "claimed": true, 00:15:01.210 "claim_type": "exclusive_write", 00:15:01.210 "zoned": false, 00:15:01.210 "supported_io_types": { 00:15:01.210 "read": true, 00:15:01.211 "write": true, 00:15:01.211 "unmap": true, 00:15:01.211 "flush": true, 00:15:01.211 "reset": true, 00:15:01.211 "nvme_admin": false, 00:15:01.211 "nvme_io": false, 00:15:01.211 "nvme_io_md": false, 00:15:01.211 "write_zeroes": true, 00:15:01.211 "zcopy": true, 00:15:01.211 "get_zone_info": false, 00:15:01.211 "zone_management": false, 00:15:01.211 "zone_append": false, 00:15:01.211 "compare": false, 00:15:01.211 "compare_and_write": false, 00:15:01.211 "abort": true, 00:15:01.211 "seek_hole": false, 00:15:01.211 "seek_data": false, 00:15:01.211 "copy": true, 00:15:01.211 "nvme_iov_md": false 00:15:01.211 }, 00:15:01.211 "memory_domains": [ 00:15:01.211 { 00:15:01.211 "dma_device_id": "system", 00:15:01.211 "dma_device_type": 1 00:15:01.211 }, 00:15:01.211 { 00:15:01.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.211 "dma_device_type": 2 00:15:01.211 } 00:15:01.211 ], 00:15:01.211 "driver_specific": {} 00:15:01.211 } 00:15:01.211 ] 00:15:01.211 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.211 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:15:01.211 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 
-- # (( i++ )) 00:15:01.211 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:01.211 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:01.211 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.211 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.211 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.211 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.211 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:01.211 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.211 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.211 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.211 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.211 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.211 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.211 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.211 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # 
set +x 00:15:01.211 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.211 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.211 "name": "Existed_Raid", 00:15:01.211 "uuid": "26a3553a-b1d2-41b4-8eeb-6af3c3959517", 00:15:01.211 "strip_size_kb": 0, 00:15:01.211 "state": "online", 00:15:01.211 "raid_level": "raid1", 00:15:01.211 "superblock": true, 00:15:01.211 "num_base_bdevs": 2, 00:15:01.211 "num_base_bdevs_discovered": 2, 00:15:01.211 "num_base_bdevs_operational": 2, 00:15:01.211 "base_bdevs_list": [ 00:15:01.211 { 00:15:01.211 "name": "BaseBdev1", 00:15:01.211 "uuid": "a447ac12-1ce6-4604-b7d7-3cd160123037", 00:15:01.211 "is_configured": true, 00:15:01.211 "data_offset": 256, 00:15:01.211 "data_size": 7936 00:15:01.211 }, 00:15:01.211 { 00:15:01.211 "name": "BaseBdev2", 00:15:01.211 "uuid": "ca7196d6-cde7-4c87-ac90-6a76ead36b58", 00:15:01.211 "is_configured": true, 00:15:01.211 "data_offset": 256, 00:15:01.211 "data_size": 7936 00:15:01.211 } 00:15:01.211 ] 00:15:01.211 }' 00:15:01.211 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.211 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.469 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:01.469 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:01.469 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:01.469 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:01.469 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@184 -- # local name 00:15:01.469 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:01.469 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:01.469 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:01.469 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.469 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.469 [2024-10-30 09:49:39.988679] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:01.469 09:49:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.469 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:01.469 "name": "Existed_Raid", 00:15:01.469 "aliases": [ 00:15:01.469 "26a3553a-b1d2-41b4-8eeb-6af3c3959517" 00:15:01.469 ], 00:15:01.469 "product_name": "Raid Volume", 00:15:01.469 "block_size": 4128, 00:15:01.469 "num_blocks": 7936, 00:15:01.469 "uuid": "26a3553a-b1d2-41b4-8eeb-6af3c3959517", 00:15:01.469 "md_size": 32, 00:15:01.469 "md_interleave": true, 00:15:01.469 "dif_type": 0, 00:15:01.469 "assigned_rate_limits": { 00:15:01.469 "rw_ios_per_sec": 0, 00:15:01.469 "rw_mbytes_per_sec": 0, 00:15:01.469 "r_mbytes_per_sec": 0, 00:15:01.469 "w_mbytes_per_sec": 0 00:15:01.469 }, 00:15:01.469 "claimed": false, 00:15:01.469 "zoned": false, 00:15:01.469 "supported_io_types": { 00:15:01.469 "read": true, 00:15:01.469 "write": true, 00:15:01.469 "unmap": false, 00:15:01.469 "flush": false, 00:15:01.469 "reset": true, 00:15:01.469 "nvme_admin": false, 00:15:01.469 "nvme_io": false, 00:15:01.469 "nvme_io_md": false, 00:15:01.469 
"write_zeroes": true, 00:15:01.469 "zcopy": false, 00:15:01.469 "get_zone_info": false, 00:15:01.469 "zone_management": false, 00:15:01.469 "zone_append": false, 00:15:01.469 "compare": false, 00:15:01.469 "compare_and_write": false, 00:15:01.469 "abort": false, 00:15:01.469 "seek_hole": false, 00:15:01.469 "seek_data": false, 00:15:01.469 "copy": false, 00:15:01.469 "nvme_iov_md": false 00:15:01.469 }, 00:15:01.469 "memory_domains": [ 00:15:01.469 { 00:15:01.469 "dma_device_id": "system", 00:15:01.469 "dma_device_type": 1 00:15:01.469 }, 00:15:01.469 { 00:15:01.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.469 "dma_device_type": 2 00:15:01.469 }, 00:15:01.469 { 00:15:01.469 "dma_device_id": "system", 00:15:01.469 "dma_device_type": 1 00:15:01.469 }, 00:15:01.469 { 00:15:01.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.469 "dma_device_type": 2 00:15:01.469 } 00:15:01.469 ], 00:15:01.469 "driver_specific": { 00:15:01.469 "raid": { 00:15:01.469 "uuid": "26a3553a-b1d2-41b4-8eeb-6af3c3959517", 00:15:01.469 "strip_size_kb": 0, 00:15:01.469 "state": "online", 00:15:01.469 "raid_level": "raid1", 00:15:01.469 "superblock": true, 00:15:01.469 "num_base_bdevs": 2, 00:15:01.469 "num_base_bdevs_discovered": 2, 00:15:01.469 "num_base_bdevs_operational": 2, 00:15:01.469 "base_bdevs_list": [ 00:15:01.469 { 00:15:01.469 "name": "BaseBdev1", 00:15:01.469 "uuid": "a447ac12-1ce6-4604-b7d7-3cd160123037", 00:15:01.469 "is_configured": true, 00:15:01.469 "data_offset": 256, 00:15:01.469 "data_size": 7936 00:15:01.469 }, 00:15:01.469 { 00:15:01.469 "name": "BaseBdev2", 00:15:01.469 "uuid": "ca7196d6-cde7-4c87-ac90-6a76ead36b58", 00:15:01.469 "is_configured": true, 00:15:01.469 "data_offset": 256, 00:15:01.469 "data_size": 7936 00:15:01.469 } 00:15:01.469 ] 00:15:01.469 } 00:15:01.469 } 00:15:01.469 }' 00:15:01.469 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:15:01.469 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:01.469 BaseBdev2' 00:15:01.469 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.469 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:15:01.469 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.469 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:01.469 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.469 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.728 [2024-10-30 09:49:40.156426] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # 
expected_state=online 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.728 "name": "Existed_Raid", 00:15:01.728 "uuid": "26a3553a-b1d2-41b4-8eeb-6af3c3959517", 00:15:01.728 "strip_size_kb": 0, 00:15:01.728 "state": "online", 00:15:01.728 "raid_level": "raid1", 00:15:01.728 "superblock": true, 00:15:01.728 "num_base_bdevs": 2, 00:15:01.728 "num_base_bdevs_discovered": 1, 00:15:01.728 "num_base_bdevs_operational": 1, 00:15:01.728 "base_bdevs_list": [ 00:15:01.728 { 00:15:01.728 "name": null, 00:15:01.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.728 "is_configured": false, 00:15:01.728 "data_offset": 0, 00:15:01.728 "data_size": 7936 00:15:01.728 }, 00:15:01.728 { 00:15:01.728 "name": "BaseBdev2", 00:15:01.728 "uuid": "ca7196d6-cde7-4c87-ac90-6a76ead36b58", 00:15:01.728 "is_configured": true, 00:15:01.728 "data_offset": 256, 00:15:01.728 "data_size": 7936 00:15:01.728 } 00:15:01.728 ] 00:15:01.728 }' 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.728 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.986 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:01.986 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:01.986 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:01.986 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.986 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.986 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:15:01.986 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.986 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:01.986 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:01.986 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:01.986 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.986 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.986 [2024-10-30 09:49:40.571424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:01.986 [2024-10-30 09:49:40.571518] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:02.245 [2024-10-30 09:49:40.630532] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.245 [2024-10-30 09:49:40.630575] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.245 [2024-10-30 09:49:40.630586] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:02.245 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.245 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:02.245 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:02.245 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:02.245 09:49:40 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.245 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.245 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:02.245 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.245 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:02.245 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:02.245 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:02.245 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 85929 00:15:02.245 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 85929 ']' 00:15:02.245 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 85929 00:15:02.245 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:15:02.245 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:02.245 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85929 00:15:02.245 killing process with pid 85929 00:15:02.245 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:02.245 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:02.245 09:49:40 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85929' 00:15:02.245 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 85929 00:15:02.245 [2024-10-30 09:49:40.691686] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:02.245 09:49:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 85929 00:15:02.245 [2024-10-30 09:49:40.702148] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:02.817 ************************************ 00:15:02.817 END TEST raid_state_function_test_sb_md_interleaved 00:15:02.817 ************************************ 00:15:02.817 09:49:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:15:02.817 00:15:02.817 real 0m3.706s 00:15:02.817 user 0m5.336s 00:15:02.817 sys 0m0.598s 00:15:02.817 09:49:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:02.817 09:49:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:03.077 09:49:41 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:15:03.077 09:49:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:03.077 09:49:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:03.077 09:49:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:03.077 ************************************ 00:15:03.077 START TEST raid_superblock_test_md_interleaved 00:15:03.077 ************************************ 00:15:03.077 09:49:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:15:03.077 09:49:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:15:03.077 09:49:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:03.077 09:49:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:03.077 09:49:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:03.077 09:49:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:03.077 09:49:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:03.077 09:49:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:03.077 09:49:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:03.077 09:49:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:03.078 09:49:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:03.078 09:49:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:03.078 09:49:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:03.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:03.078 09:49:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:03.078 09:49:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:03.078 09:49:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:03.078 09:49:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=86165 00:15:03.078 09:49:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 86165 00:15:03.078 09:49:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 86165 ']' 00:15:03.078 09:49:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.078 09:49:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:03.078 09:49:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.078 09:49:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:03.078 09:49:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:03.078 09:49:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:03.078 [2024-10-30 09:49:41.530500] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:15:03.078 [2024-10-30 09:49:41.531091] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86165 ]
00:15:03.078 [2024-10-30 09:49:41.691742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:03.339 [2024-10-30 09:49:41.793580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:03.339 [2024-10-30 09:49:41.929225] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:03.339 [2024-10-30 09:49:41.929271] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@866 -- # return 0
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:03.912 malloc1
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:03.912 [2024-10-30 09:49:42.445713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:03.912 [2024-10-30 09:49:42.446096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:03.912 [2024-10-30 09:49:42.446252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:15:03.912 [2024-10-30 09:49:42.446365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:03.912 [2024-10-30 09:49:42.448293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:03.912 [2024-10-30 09:49:42.448467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:03.912 pt1
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:03.912 malloc2
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:03.912 [2024-10-30 09:49:42.485671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:03.912 [2024-10-30 09:49:42.485801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:03.912 [2024-10-30 09:49:42.485950] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:15:03.912 [2024-10-30 09:49:42.486020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:03.912 [2024-10-30 09:49:42.487898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:03.912 [2024-10-30 09:49:42.488130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:03.912 pt2
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:03.912 [2024-10-30 09:49:42.493708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:03.912 [2024-10-30 09:49:42.495627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:03.912 [2024-10-30 09:49:42.495798] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:15:03.912 [2024-10-30 09:49:42.495810] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:15:03.912 [2024-10-30 09:49:42.495880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:15:03.912 [2024-10-30 09:49:42.495945] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:15:03.912 [2024-10-30 09:49:42.495955] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:15:03.912 [2024-10-30 09:49:42.496018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:03.912 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.171 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:04.171 "name": "raid_bdev1",
00:15:04.171 "uuid": "74ef28f8-2c4f-404e-a619-e0b1b5d4140c",
00:15:04.171 "strip_size_kb": 0,
00:15:04.171 "state": "online",
00:15:04.171 "raid_level": "raid1",
00:15:04.171 "superblock": true,
00:15:04.171 "num_base_bdevs": 2,
00:15:04.171 "num_base_bdevs_discovered": 2,
00:15:04.171 "num_base_bdevs_operational": 2,
00:15:04.171 "base_bdevs_list": [
00:15:04.171 {
00:15:04.171 "name": "pt1",
00:15:04.171 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:04.171 "is_configured": true,
00:15:04.171 "data_offset": 256,
00:15:04.171 "data_size": 7936
00:15:04.171 },
00:15:04.171 {
00:15:04.171 "name": "pt2",
00:15:04.171 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:04.171 "is_configured": true,
00:15:04.171 "data_offset": 256,
00:15:04.171 "data_size": 7936
00:15:04.171 }
00:15:04.171 ]
00:15:04.171 }'
00:15:04.171 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:04.171 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:04.440 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:15:04.440 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:15:04.440 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:04.440 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:15:04.440 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:15:04.440 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:04.440 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:04.440 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:15:04.440 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.440 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:04.440 [2024-10-30 09:49:42.842088] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:04.440 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.440 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:15:04.440 "name": "raid_bdev1",
00:15:04.440 "aliases": [
00:15:04.440 "74ef28f8-2c4f-404e-a619-e0b1b5d4140c"
00:15:04.440 ],
00:15:04.440 "product_name": "Raid Volume",
00:15:04.440 "block_size": 4128,
00:15:04.440 "num_blocks": 7936,
00:15:04.440 "uuid": "74ef28f8-2c4f-404e-a619-e0b1b5d4140c",
00:15:04.440 "md_size": 32,
00:15:04.440 "md_interleave": true,
00:15:04.440 "dif_type": 0,
00:15:04.440 "assigned_rate_limits": {
00:15:04.440 "rw_ios_per_sec": 0,
00:15:04.440 "rw_mbytes_per_sec": 0,
00:15:04.440 "r_mbytes_per_sec": 0,
00:15:04.440 "w_mbytes_per_sec": 0
00:15:04.440 },
00:15:04.440 "claimed": false,
00:15:04.440 "zoned": false,
00:15:04.440 "supported_io_types": {
00:15:04.440 "read": true,
00:15:04.440 "write": true,
00:15:04.440 "unmap": false,
00:15:04.440 "flush": false,
00:15:04.440 "reset": true,
00:15:04.440 "nvme_admin": false,
00:15:04.440 "nvme_io": false,
00:15:04.440 "nvme_io_md": false,
00:15:04.440 "write_zeroes": true,
00:15:04.440 "zcopy": false,
00:15:04.440 "get_zone_info": false,
00:15:04.440 "zone_management": false,
00:15:04.440 "zone_append": false,
00:15:04.440 "compare": false,
00:15:04.440 "compare_and_write": false,
00:15:04.440 "abort": false,
00:15:04.440 "seek_hole": false,
00:15:04.440 "seek_data": false,
00:15:04.440 "copy": false,
00:15:04.440 "nvme_iov_md": false
00:15:04.440 },
00:15:04.440 "memory_domains": [
00:15:04.440 {
00:15:04.440 "dma_device_id": "system",
00:15:04.440 "dma_device_type": 1
00:15:04.440 },
00:15:04.440 {
00:15:04.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:04.440 "dma_device_type": 2
00:15:04.440 },
00:15:04.440 {
00:15:04.440 "dma_device_id": "system",
00:15:04.440 "dma_device_type": 1
00:15:04.440 },
00:15:04.440 {
00:15:04.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:04.440 "dma_device_type": 2
00:15:04.440 }
00:15:04.440 ],
00:15:04.440 "driver_specific": {
00:15:04.440 "raid": {
00:15:04.440 "uuid": "74ef28f8-2c4f-404e-a619-e0b1b5d4140c",
00:15:04.440 "strip_size_kb": 0,
00:15:04.440 "state": "online",
00:15:04.440 "raid_level": "raid1",
00:15:04.440 "superblock": true,
00:15:04.440 "num_base_bdevs": 2,
00:15:04.440 "num_base_bdevs_discovered": 2,
00:15:04.440 "num_base_bdevs_operational": 2,
00:15:04.440 "base_bdevs_list": [
00:15:04.440 {
00:15:04.440 "name": "pt1",
00:15:04.441 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:04.441 "is_configured": true,
00:15:04.441 "data_offset": 256,
00:15:04.441 "data_size": 7936
00:15:04.441 },
00:15:04.441 {
00:15:04.441 "name": "pt2",
00:15:04.441 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:04.441 "is_configured": true,
00:15:04.441 "data_offset": 256,
00:15:04.441 "data_size": 7936
00:15:04.441 }
00:15:04.441 ]
00:15:04.441 }
00:15:04.441 }
00:15:04.441 }'
00:15:04.441 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:04.441 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:15:04.441 pt2'
00:15:04.441 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:04.441 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:15:04.441 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:04.441 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:15:04.441 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.441 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:04.441 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:04.441 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.441 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:15:04.441 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:15:04.441 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:04.441 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:15:04.441 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.441 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:04.441 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:04.441 09:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.441 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:15:04.441 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:15:04.441 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:04.441 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.441 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:04.441 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:15:04.441 [2024-10-30 09:49:43.014104] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:04.441 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.441 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=74ef28f8-2c4f-404e-a619-e0b1b5d4140c
00:15:04.441 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 74ef28f8-2c4f-404e-a619-e0b1b5d4140c ']'
00:15:04.441 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:04.441 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.441 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:04.441 [2024-10-30 09:49:43.045793] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:04.441 [2024-10-30 09:49:43.045894] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:04.441 [2024-10-30 09:49:43.045977] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:04.441 [2024-10-30 09:49:43.046035] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:04.441 [2024-10-30 09:49:43.046046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:15:04.441 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.441 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:04.441 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:15:04.441 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.441 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:04.704 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.704 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:15:04.704 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:15:04.704 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:04.704 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:15:04.704 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.704 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:04.705 [2024-10-30 09:49:43.137868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:15:04.705 [2024-10-30 09:49:43.139744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:15:04.705 [2024-10-30 09:49:43.139818] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:15:04.705 [2024-10-30 09:49:43.139864] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:15:04.705 [2024-10-30 09:49:43.139880] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:04.705 [2024-10-30 09:49:43.139891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:15:04.705 request:
00:15:04.705 {
00:15:04.705 "name": "raid_bdev1",
00:15:04.705 "raid_level": "raid1",
00:15:04.705 "base_bdevs": [
00:15:04.705 "malloc1",
00:15:04.705 "malloc2"
00:15:04.705 ],
00:15:04.705 "superblock": false,
00:15:04.705 "method": "bdev_raid_create",
00:15:04.705 "req_id": 1
00:15:04.705 }
00:15:04.705 Got JSON-RPC error response
00:15:04.705 response:
00:15:04.705 {
00:15:04.705 "code": -17,
00:15:04.705 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:15:04.705 }
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:04.705 [2024-10-30 09:49:43.181832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:04.705 [2024-10-30 09:49:43.182108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:04.705 [2024-10-30 09:49:43.182194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:15:04.705 [2024-10-30 09:49:43.182246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:04.705 [2024-10-30 09:49:43.184182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:04.705 [2024-10-30 09:49:43.184358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:04.705 [2024-10-30 09:49:43.184465] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:15:04.705 [2024-10-30 09:49:43.184524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:04.705 pt1
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:04.705 "name": "raid_bdev1",
00:15:04.705 "uuid": "74ef28f8-2c4f-404e-a619-e0b1b5d4140c",
00:15:04.705 "strip_size_kb": 0,
00:15:04.705 "state": "configuring",
00:15:04.705 "raid_level": "raid1",
00:15:04.705 "superblock": true,
00:15:04.705 "num_base_bdevs": 2,
00:15:04.705 "num_base_bdevs_discovered": 1,
00:15:04.705 "num_base_bdevs_operational": 2,
00:15:04.705 "base_bdevs_list": [
00:15:04.705 {
00:15:04.705 "name": "pt1",
00:15:04.705 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:04.705 "is_configured": true,
00:15:04.705 "data_offset": 256,
00:15:04.705 "data_size": 7936
00:15:04.705 },
00:15:04.705 {
00:15:04.705 "name": null,
00:15:04.705 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:04.705 "is_configured": false,
00:15:04.705 "data_offset": 256,
00:15:04.705 "data_size": 7936
00:15:04.705 }
00:15:04.705 ]
00:15:04.705 }'
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:04.705 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:04.967 [2024-10-30 09:49:43.501907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:04.967 [2024-10-30 09:49:43.502233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:04.967 [2024-10-30 09:49:43.502311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:15:04.967 [2024-10-30 09:49:43.502362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:04.967 [2024-10-30 09:49:43.502543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:04.967 [2024-10-30 09:49:43.502668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:04.967 [2024-10-30 09:49:43.502789] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:15:04.967 [2024-10-30 09:49:43.502832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:04.967 [2024-10-30 09:49:43.502937] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:15:04.967 [2024-10-30 09:49:43.503043] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:15:04.967 [2024-10-30 09:49:43.503130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:15:04.967 [2024-10-30 09:49:43.503200] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:15:04.967 [2024-10-30 09:49:43.503208] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:15:04.967 [2024-10-30 09:49:43.503268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:04.967 pt2
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:04.967 "name": "raid_bdev1",
00:15:04.967 "uuid": "74ef28f8-2c4f-404e-a619-e0b1b5d4140c",
00:15:04.967 "strip_size_kb": 0,
00:15:04.967 "state": "online",
00:15:04.967 "raid_level": "raid1",
00:15:04.967 "superblock": true,
00:15:04.967 "num_base_bdevs": 2,
00:15:04.967 "num_base_bdevs_discovered": 2,
00:15:04.967 "num_base_bdevs_operational": 2,
00:15:04.967 "base_bdevs_list": [
00:15:04.967 {
00:15:04.967 "name": "pt1",
00:15:04.967 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:04.967 "is_configured": true,
00:15:04.967 "data_offset": 256,
00:15:04.967 "data_size": 7936
00:15:04.967 },
00:15:04.967 {
00:15:04.967 "name": "pt2",
00:15:04.967 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:04.967 "is_configured": true,
00:15:04.967 "data_offset": 256,
00:15:04.967 "data_size": 7936
00:15:04.967 }
00:15:04.967 ]
00:15:04.967 }'
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:04.967 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:05.539 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:15:05.539 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:15:05.539 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:05.539 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:15:05.539 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:15:05.539 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:05.539 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:05.539 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:05.539 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:15:05.539 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:05.539 [2024-10-30 09:49:43.858219] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:05.539 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 ==
0 ]] 00:15:05.539 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:05.539 "name": "raid_bdev1", 00:15:05.539 "aliases": [ 00:15:05.539 "74ef28f8-2c4f-404e-a619-e0b1b5d4140c" 00:15:05.539 ], 00:15:05.539 "product_name": "Raid Volume", 00:15:05.539 "block_size": 4128, 00:15:05.539 "num_blocks": 7936, 00:15:05.539 "uuid": "74ef28f8-2c4f-404e-a619-e0b1b5d4140c", 00:15:05.539 "md_size": 32, 00:15:05.539 "md_interleave": true, 00:15:05.539 "dif_type": 0, 00:15:05.539 "assigned_rate_limits": { 00:15:05.539 "rw_ios_per_sec": 0, 00:15:05.539 "rw_mbytes_per_sec": 0, 00:15:05.539 "r_mbytes_per_sec": 0, 00:15:05.539 "w_mbytes_per_sec": 0 00:15:05.539 }, 00:15:05.539 "claimed": false, 00:15:05.539 "zoned": false, 00:15:05.539 "supported_io_types": { 00:15:05.539 "read": true, 00:15:05.539 "write": true, 00:15:05.539 "unmap": false, 00:15:05.539 "flush": false, 00:15:05.539 "reset": true, 00:15:05.539 "nvme_admin": false, 00:15:05.539 "nvme_io": false, 00:15:05.539 "nvme_io_md": false, 00:15:05.539 "write_zeroes": true, 00:15:05.539 "zcopy": false, 00:15:05.539 "get_zone_info": false, 00:15:05.539 "zone_management": false, 00:15:05.539 "zone_append": false, 00:15:05.539 "compare": false, 00:15:05.539 "compare_and_write": false, 00:15:05.539 "abort": false, 00:15:05.539 "seek_hole": false, 00:15:05.539 "seek_data": false, 00:15:05.539 "copy": false, 00:15:05.539 "nvme_iov_md": false 00:15:05.539 }, 00:15:05.539 "memory_domains": [ 00:15:05.539 { 00:15:05.539 "dma_device_id": "system", 00:15:05.539 "dma_device_type": 1 00:15:05.539 }, 00:15:05.539 { 00:15:05.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.539 "dma_device_type": 2 00:15:05.539 }, 00:15:05.539 { 00:15:05.539 "dma_device_id": "system", 00:15:05.539 "dma_device_type": 1 00:15:05.539 }, 00:15:05.539 { 00:15:05.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.539 "dma_device_type": 2 00:15:05.539 } 00:15:05.539 ], 00:15:05.539 "driver_specific": { 
00:15:05.539 "raid": { 00:15:05.539 "uuid": "74ef28f8-2c4f-404e-a619-e0b1b5d4140c", 00:15:05.539 "strip_size_kb": 0, 00:15:05.539 "state": "online", 00:15:05.539 "raid_level": "raid1", 00:15:05.539 "superblock": true, 00:15:05.539 "num_base_bdevs": 2, 00:15:05.539 "num_base_bdevs_discovered": 2, 00:15:05.539 "num_base_bdevs_operational": 2, 00:15:05.539 "base_bdevs_list": [ 00:15:05.539 { 00:15:05.539 "name": "pt1", 00:15:05.539 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:05.539 "is_configured": true, 00:15:05.539 "data_offset": 256, 00:15:05.539 "data_size": 7936 00:15:05.539 }, 00:15:05.539 { 00:15:05.539 "name": "pt2", 00:15:05.539 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:05.539 "is_configured": true, 00:15:05.539 "data_offset": 256, 00:15:05.539 "data_size": 7936 00:15:05.539 } 00:15:05.539 ] 00:15:05.539 } 00:15:05.539 } 00:15:05.539 }' 00:15:05.539 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:05.539 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:05.539 pt2' 00:15:05.539 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.539 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:15:05.539 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:05.539 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:05.539 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.540 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.540 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:05.540 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.540 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:05.540 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:05.540 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:05.540 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.540 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:05.540 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.540 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:05.540 09:49:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:05.540 [2024-10-30 09:49:44.026240] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 74ef28f8-2c4f-404e-a619-e0b1b5d4140c '!=' 74ef28f8-2c4f-404e-a619-e0b1b5d4140c ']' 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:05.540 [2024-10-30 09:49:44.050031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.540 
09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.540 "name": "raid_bdev1", 00:15:05.540 "uuid": "74ef28f8-2c4f-404e-a619-e0b1b5d4140c", 00:15:05.540 "strip_size_kb": 0, 00:15:05.540 "state": "online", 00:15:05.540 "raid_level": "raid1", 00:15:05.540 "superblock": true, 00:15:05.540 "num_base_bdevs": 2, 00:15:05.540 "num_base_bdevs_discovered": 1, 00:15:05.540 "num_base_bdevs_operational": 1, 00:15:05.540 "base_bdevs_list": [ 00:15:05.540 { 00:15:05.540 "name": null, 00:15:05.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.540 "is_configured": false, 00:15:05.540 
"data_offset": 0, 00:15:05.540 "data_size": 7936 00:15:05.540 }, 00:15:05.540 { 00:15:05.540 "name": "pt2", 00:15:05.540 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:05.540 "is_configured": true, 00:15:05.540 "data_offset": 256, 00:15:05.540 "data_size": 7936 00:15:05.540 } 00:15:05.540 ] 00:15:05.540 }' 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.540 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:05.830 [2024-10-30 09:49:44.378076] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:05.830 [2024-10-30 09:49:44.378098] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.830 [2024-10-30 09:49:44.378153] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.830 [2024-10-30 09:49:44.378190] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:05.830 [2024-10-30 09:49:44.378199] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:05.830 09:49:44 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:05.830 [2024-10-30 09:49:44.426077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:05.830 [2024-10-30 09:49:44.426434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.830 [2024-10-30 09:49:44.426510] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:05.830 [2024-10-30 09:49:44.426552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.830 [2024-10-30 09:49:44.428108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.830 [2024-10-30 09:49:44.428271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:05.830 [2024-10-30 09:49:44.428373] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:05.830 [2024-10-30 09:49:44.428451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:05.830 [2024-10-30 09:49:44.428554] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:05.830 [2024-10-30 09:49:44.428578] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:05.830 [2024-10-30 09:49:44.428659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:05.830 [2024-10-30 09:49:44.428719] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:05.830 [2024-10-30 09:49:44.428775] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:05.830 [2024-10-30 09:49:44.428845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:15:05.830 pt2 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:05.830 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.088 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.088 "name": "raid_bdev1", 00:15:06.088 "uuid": "74ef28f8-2c4f-404e-a619-e0b1b5d4140c", 00:15:06.088 "strip_size_kb": 0, 00:15:06.088 "state": "online", 00:15:06.088 "raid_level": "raid1", 00:15:06.088 "superblock": true, 00:15:06.088 "num_base_bdevs": 2, 00:15:06.088 "num_base_bdevs_discovered": 1, 00:15:06.088 "num_base_bdevs_operational": 1, 00:15:06.088 "base_bdevs_list": [ 00:15:06.088 { 00:15:06.088 "name": null, 00:15:06.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.088 "is_configured": false, 00:15:06.088 "data_offset": 256, 00:15:06.088 "data_size": 7936 00:15:06.088 }, 00:15:06.088 { 00:15:06.088 "name": "pt2", 00:15:06.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.088 "is_configured": true, 00:15:06.088 "data_offset": 256, 00:15:06.088 "data_size": 7936 00:15:06.088 } 00:15:06.088 ] 00:15:06.088 }' 00:15:06.088 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.088 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:06.346 [2024-10-30 09:49:44.746125] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.346 [2024-10-30 09:49:44.746270] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:06.346 [2024-10-30 09:49:44.746336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:06.346 
[2024-10-30 09:49:44.746381] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:06.346 [2024-10-30 09:49:44.746390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:06.346 [2024-10-30 09:49:44.790145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:06.346 [2024-10-30 09:49:44.790422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:15:06.346 [2024-10-30 09:49:44.790448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:15:06.346 [2024-10-30 09:49:44.790456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.346 [2024-10-30 09:49:44.792069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.346 [2024-10-30 09:49:44.792095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:06.346 [2024-10-30 09:49:44.792140] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:06.346 [2024-10-30 09:49:44.792178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:06.346 [2024-10-30 09:49:44.792252] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:06.346 [2024-10-30 09:49:44.792261] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.346 [2024-10-30 09:49:44.792275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:06.346 [2024-10-30 09:49:44.792313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:06.346 [2024-10-30 09:49:44.792366] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:06.346 [2024-10-30 09:49:44.792373] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:06.346 [2024-10-30 09:49:44.792425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:06.346 [2024-10-30 09:49:44.792471] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:06.346 [2024-10-30 09:49:44.792479] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:06.346 [2024-10-30 
09:49:44.792531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.346 pt1 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.346 
09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.346 "name": "raid_bdev1", 00:15:06.346 "uuid": "74ef28f8-2c4f-404e-a619-e0b1b5d4140c", 00:15:06.346 "strip_size_kb": 0, 00:15:06.346 "state": "online", 00:15:06.346 "raid_level": "raid1", 00:15:06.346 "superblock": true, 00:15:06.346 "num_base_bdevs": 2, 00:15:06.346 "num_base_bdevs_discovered": 1, 00:15:06.346 "num_base_bdevs_operational": 1, 00:15:06.346 "base_bdevs_list": [ 00:15:06.346 { 00:15:06.346 "name": null, 00:15:06.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.346 "is_configured": false, 00:15:06.346 "data_offset": 256, 00:15:06.346 "data_size": 7936 00:15:06.346 }, 00:15:06.346 { 00:15:06.346 "name": "pt2", 00:15:06.346 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.346 "is_configured": true, 00:15:06.346 "data_offset": 256, 00:15:06.346 "data_size": 7936 00:15:06.346 } 00:15:06.346 ] 00:15:06.346 }' 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.346 09:49:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:06.605 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:06.605 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.605 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:06.605 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:06.605 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.605 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:06.605 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:06.605 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:06.605 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.605 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:06.605 [2024-10-30 09:49:45.126397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:06.605 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.605 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 74ef28f8-2c4f-404e-a619-e0b1b5d4140c '!=' 74ef28f8-2c4f-404e-a619-e0b1b5d4140c ']' 00:15:06.605 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 86165 00:15:06.605 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 86165 ']' 00:15:06.605 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 86165 00:15:06.605 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:15:06.605 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:06.605 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86165 00:15:06.605 killing process with pid 86165 00:15:06.605 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:15:06.605 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:06.605 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86165' 00:15:06.605 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@971 -- # kill 86165 00:15:06.605 [2024-10-30 09:49:45.176557] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:06.605 [2024-10-30 09:49:45.176621] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:06.605 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@976 -- # wait 86165 00:15:06.605 [2024-10-30 09:49:45.176658] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:06.605 [2024-10-30 09:49:45.176670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:06.862 [2024-10-30 09:49:45.278515] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:07.430 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:15:07.430 00:15:07.430 real 0m4.371s 00:15:07.430 user 0m6.716s 00:15:07.430 sys 0m0.753s 00:15:07.430 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:07.430 ************************************ 00:15:07.430 END TEST raid_superblock_test_md_interleaved 00:15:07.430 ************************************ 00:15:07.430 09:49:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:07.430 09:49:45 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:15:07.430 09:49:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:07.430 09:49:45 
bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:07.430 09:49:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:07.430 ************************************ 00:15:07.430 START TEST raid_rebuild_test_sb_md_interleaved 00:15:07.430 ************************************ 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false false 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:07.430 09:49:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:07.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=86471 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 86471 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 86471 ']' 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:07.430 09:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:07.430 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:07.430 Zero copy mechanism will not be used. 00:15:07.430 [2024-10-30 09:49:45.955124] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:15:07.430 [2024-10-30 09:49:45.955215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86471 ] 00:15:07.688 [2024-10-30 09:49:46.105886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.688 [2024-10-30 09:49:46.184637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.688 [2024-10-30 09:49:46.291236] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:07.688 [2024-10-30 09:49:46.291370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.253 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:08.253 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:15:08.253 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:08.253 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:15:08.253 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.253 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:08.253 BaseBdev1_malloc 00:15:08.253 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.253 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:08.253 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.253 09:49:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:08.253 [2024-10-30 09:49:46.833418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:08.253 [2024-10-30 09:49:46.833566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.253 [2024-10-30 09:49:46.833601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:08.253 [2024-10-30 09:49:46.833656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.253 [2024-10-30 09:49:46.835157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.253 [2024-10-30 09:49:46.835254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:08.254 BaseBdev1 00:15:08.254 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.254 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:08.254 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:15:08.254 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.254 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:08.254 BaseBdev2_malloc 00:15:08.254 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.254 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:08.254 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.254 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:15:08.254 [2024-10-30 09:49:46.864163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:08.254 [2024-10-30 09:49:46.864208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.254 [2024-10-30 09:49:46.864223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:08.254 [2024-10-30 09:49:46.864233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.254 [2024-10-30 09:49:46.865714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.254 [2024-10-30 09:49:46.865743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:08.254 BaseBdev2 00:15:08.254 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.254 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:15:08.254 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.254 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:08.512 spare_malloc 00:15:08.512 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.512 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:08.512 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.512 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:08.512 spare_delay 00:15:08.512 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.512 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:08.512 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.512 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:08.512 [2024-10-30 09:49:46.917505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:08.512 [2024-10-30 09:49:46.917546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.512 [2024-10-30 09:49:46.917560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:08.512 [2024-10-30 09:49:46.917569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.512 [2024-10-30 09:49:46.919044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.512 [2024-10-30 09:49:46.919086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:08.512 spare 00:15:08.512 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.513 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:08.513 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.513 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:08.513 [2024-10-30 09:49:46.925531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:08.513 [2024-10-30 09:49:46.926948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:08.513 [2024-10-30 
09:49:46.927093] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:08.513 [2024-10-30 09:49:46.927106] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:08.513 [2024-10-30 09:49:46.927161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:08.513 [2024-10-30 09:49:46.927213] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:08.513 [2024-10-30 09:49:46.927219] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:08.513 [2024-10-30 09:49:46.927270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.513 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.513 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:08.513 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.513 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.513 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.513 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.513 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:08.513 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.513 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.513 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:08.513 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.513 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.513 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.513 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.513 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:08.513 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.513 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.513 "name": "raid_bdev1", 00:15:08.513 "uuid": "fb92e769-bc26-417f-8407-c05c6cc4967e", 00:15:08.513 "strip_size_kb": 0, 00:15:08.513 "state": "online", 00:15:08.513 "raid_level": "raid1", 00:15:08.513 "superblock": true, 00:15:08.513 "num_base_bdevs": 2, 00:15:08.513 "num_base_bdevs_discovered": 2, 00:15:08.513 "num_base_bdevs_operational": 2, 00:15:08.513 "base_bdevs_list": [ 00:15:08.513 { 00:15:08.513 "name": "BaseBdev1", 00:15:08.513 "uuid": "fbafe737-1fe0-5509-bdd8-03dc369fd545", 00:15:08.513 "is_configured": true, 00:15:08.513 "data_offset": 256, 00:15:08.513 "data_size": 7936 00:15:08.513 }, 00:15:08.513 { 00:15:08.513 "name": "BaseBdev2", 00:15:08.513 "uuid": "35f9e0e9-8de7-5e7e-b73b-63841dbe420f", 00:15:08.513 "is_configured": true, 00:15:08.513 "data_offset": 256, 00:15:08.513 "data_size": 7936 00:15:08.513 } 00:15:08.513 ] 00:15:08.513 }' 00:15:08.513 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.513 09:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:08.771 09:49:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:08.771 [2024-10-30 09:49:47.245788] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:08.771 09:49:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:08.771 [2024-10-30 09:49:47.301565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.771 09:49:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.771 "name": "raid_bdev1", 00:15:08.771 "uuid": "fb92e769-bc26-417f-8407-c05c6cc4967e", 00:15:08.771 "strip_size_kb": 0, 00:15:08.771 "state": "online", 00:15:08.771 "raid_level": "raid1", 00:15:08.771 "superblock": true, 00:15:08.771 "num_base_bdevs": 2, 00:15:08.771 "num_base_bdevs_discovered": 1, 00:15:08.771 "num_base_bdevs_operational": 1, 00:15:08.771 "base_bdevs_list": [ 00:15:08.771 { 00:15:08.771 "name": null, 00:15:08.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.771 "is_configured": false, 00:15:08.771 "data_offset": 0, 00:15:08.771 "data_size": 7936 00:15:08.771 }, 00:15:08.771 { 00:15:08.771 "name": "BaseBdev2", 00:15:08.771 "uuid": "35f9e0e9-8de7-5e7e-b73b-63841dbe420f", 00:15:08.771 "is_configured": true, 00:15:08.771 "data_offset": 256, 00:15:08.771 "data_size": 7936 00:15:08.771 } 00:15:08.771 ] 00:15:08.771 }' 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.771 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:09.029 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:09.029 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.029 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:09.029 [2024-10-30 09:49:47.625638] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:09.029 [2024-10-30 09:49:47.634613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:09.029 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.029 09:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:09.029 [2024-10-30 09:49:47.636070] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:10.402 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.402 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.402 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.402 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.402 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.402 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.402 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.402 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.402 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:10.402 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.402 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.402 "name": "raid_bdev1", 00:15:10.402 
"uuid": "fb92e769-bc26-417f-8407-c05c6cc4967e", 00:15:10.402 "strip_size_kb": 0, 00:15:10.402 "state": "online", 00:15:10.402 "raid_level": "raid1", 00:15:10.402 "superblock": true, 00:15:10.402 "num_base_bdevs": 2, 00:15:10.402 "num_base_bdevs_discovered": 2, 00:15:10.402 "num_base_bdevs_operational": 2, 00:15:10.402 "process": { 00:15:10.402 "type": "rebuild", 00:15:10.402 "target": "spare", 00:15:10.402 "progress": { 00:15:10.402 "blocks": 2560, 00:15:10.402 "percent": 32 00:15:10.402 } 00:15:10.402 }, 00:15:10.402 "base_bdevs_list": [ 00:15:10.402 { 00:15:10.402 "name": "spare", 00:15:10.402 "uuid": "127ec652-7077-54cc-9982-1a2710a87589", 00:15:10.402 "is_configured": true, 00:15:10.402 "data_offset": 256, 00:15:10.402 "data_size": 7936 00:15:10.402 }, 00:15:10.402 { 00:15:10.402 "name": "BaseBdev2", 00:15:10.402 "uuid": "35f9e0e9-8de7-5e7e-b73b-63841dbe420f", 00:15:10.402 "is_configured": true, 00:15:10.402 "data_offset": 256, 00:15:10.402 "data_size": 7936 00:15:10.402 } 00:15:10.402 ] 00:15:10.402 }' 00:15:10.402 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.402 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.403 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.403 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.403 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:10.403 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.403 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:10.403 [2024-10-30 09:49:48.742346] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:15:10.403 [2024-10-30 09:49:48.840682] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:10.403 [2024-10-30 09:49:48.840818] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.403 [2024-10-30 09:49:48.840833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:10.403 [2024-10-30 09:49:48.840844] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:10.403 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.403 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:10.403 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.403 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.403 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.403 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.403 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:10.403 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.403 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.403 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.403 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.403 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.403 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.403 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.403 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:10.403 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.403 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.403 "name": "raid_bdev1", 00:15:10.403 "uuid": "fb92e769-bc26-417f-8407-c05c6cc4967e", 00:15:10.403 "strip_size_kb": 0, 00:15:10.403 "state": "online", 00:15:10.403 "raid_level": "raid1", 00:15:10.403 "superblock": true, 00:15:10.403 "num_base_bdevs": 2, 00:15:10.403 "num_base_bdevs_discovered": 1, 00:15:10.403 "num_base_bdevs_operational": 1, 00:15:10.403 "base_bdevs_list": [ 00:15:10.403 { 00:15:10.403 "name": null, 00:15:10.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.403 "is_configured": false, 00:15:10.403 "data_offset": 0, 00:15:10.403 "data_size": 7936 00:15:10.403 }, 00:15:10.403 { 00:15:10.403 "name": "BaseBdev2", 00:15:10.403 "uuid": "35f9e0e9-8de7-5e7e-b73b-63841dbe420f", 00:15:10.403 "is_configured": true, 00:15:10.403 "data_offset": 256, 00:15:10.403 "data_size": 7936 00:15:10.403 } 00:15:10.403 ] 00:15:10.403 }' 00:15:10.403 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.403 09:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:10.662 09:49:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:10.662 09:49:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:10.662 09:49:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:10.662 09:49:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:10.662 09:49:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.662 09:49:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.662 09:49:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.662 09:49:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.662 09:49:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:10.662 09:49:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.662 09:49:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.662 "name": "raid_bdev1", 00:15:10.662 "uuid": "fb92e769-bc26-417f-8407-c05c6cc4967e", 00:15:10.662 "strip_size_kb": 0, 00:15:10.662 "state": "online", 00:15:10.662 "raid_level": "raid1", 00:15:10.662 "superblock": true, 00:15:10.662 "num_base_bdevs": 2, 00:15:10.662 "num_base_bdevs_discovered": 1, 00:15:10.662 "num_base_bdevs_operational": 1, 00:15:10.662 "base_bdevs_list": [ 00:15:10.662 { 00:15:10.662 "name": null, 00:15:10.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.662 "is_configured": false, 00:15:10.662 "data_offset": 0, 00:15:10.662 "data_size": 7936 00:15:10.662 }, 00:15:10.662 { 00:15:10.662 "name": "BaseBdev2", 00:15:10.662 "uuid": "35f9e0e9-8de7-5e7e-b73b-63841dbe420f", 00:15:10.662 "is_configured": true, 00:15:10.662 "data_offset": 256, 00:15:10.662 "data_size": 7936 00:15:10.662 } 00:15:10.662 ] 00:15:10.662 }' 
00:15:10.662 09:49:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.662 09:49:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:10.662 09:49:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.662 09:49:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:10.662 09:49:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:10.662 09:49:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.662 09:49:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:10.662 [2024-10-30 09:49:49.278842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:10.920 [2024-10-30 09:49:49.287210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:10.920 09:49:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.920 09:49:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:10.920 [2024-10-30 09:49:49.288679] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.853 "name": "raid_bdev1", 00:15:11.853 "uuid": "fb92e769-bc26-417f-8407-c05c6cc4967e", 00:15:11.853 "strip_size_kb": 0, 00:15:11.853 "state": "online", 00:15:11.853 "raid_level": "raid1", 00:15:11.853 "superblock": true, 00:15:11.853 "num_base_bdevs": 2, 00:15:11.853 "num_base_bdevs_discovered": 2, 00:15:11.853 "num_base_bdevs_operational": 2, 00:15:11.853 "process": { 00:15:11.853 "type": "rebuild", 00:15:11.853 "target": "spare", 00:15:11.853 "progress": { 00:15:11.853 "blocks": 2560, 00:15:11.853 "percent": 32 00:15:11.853 } 00:15:11.853 }, 00:15:11.853 "base_bdevs_list": [ 00:15:11.853 { 00:15:11.853 "name": "spare", 00:15:11.853 "uuid": "127ec652-7077-54cc-9982-1a2710a87589", 00:15:11.853 "is_configured": true, 00:15:11.853 "data_offset": 256, 00:15:11.853 "data_size": 7936 00:15:11.853 }, 00:15:11.853 { 00:15:11.853 "name": "BaseBdev2", 00:15:11.853 "uuid": "35f9e0e9-8de7-5e7e-b73b-63841dbe420f", 00:15:11.853 "is_configured": true, 00:15:11.853 "data_offset": 256, 00:15:11.853 "data_size": 7936 00:15:11.853 } 00:15:11.853 ] 00:15:11.853 }' 00:15:11.853 09:49:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:11.853 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=585 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.853 09:49:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.853 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.853 "name": "raid_bdev1", 00:15:11.853 "uuid": "fb92e769-bc26-417f-8407-c05c6cc4967e", 00:15:11.853 "strip_size_kb": 0, 00:15:11.853 "state": "online", 00:15:11.853 "raid_level": "raid1", 00:15:11.853 "superblock": true, 00:15:11.853 "num_base_bdevs": 2, 00:15:11.853 "num_base_bdevs_discovered": 2, 00:15:11.853 "num_base_bdevs_operational": 2, 00:15:11.853 "process": { 00:15:11.853 "type": "rebuild", 00:15:11.853 "target": "spare", 00:15:11.853 "progress": { 00:15:11.853 "blocks": 2560, 00:15:11.853 "percent": 32 00:15:11.853 } 00:15:11.853 }, 00:15:11.853 "base_bdevs_list": [ 00:15:11.853 { 00:15:11.854 "name": "spare", 00:15:11.854 "uuid": "127ec652-7077-54cc-9982-1a2710a87589", 00:15:11.854 "is_configured": true, 00:15:11.854 "data_offset": 256, 00:15:11.854 "data_size": 7936 00:15:11.854 }, 00:15:11.854 { 00:15:11.854 "name": "BaseBdev2", 00:15:11.854 "uuid": "35f9e0e9-8de7-5e7e-b73b-63841dbe420f", 00:15:11.854 "is_configured": true, 00:15:11.854 "data_offset": 256, 00:15:11.854 "data_size": 7936 00:15:11.854 } 00:15:11.854 ] 00:15:11.854 }' 00:15:11.854 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.854 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.854 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.111 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:12.111 09:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:13.044 09:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:13.044 09:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.044 09:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.044 09:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.044 09:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.044 09:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.044 09:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.044 09:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.044 09:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.044 09:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:13.044 09:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.044 09:49:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.044 "name": "raid_bdev1", 00:15:13.044 "uuid": "fb92e769-bc26-417f-8407-c05c6cc4967e", 00:15:13.044 "strip_size_kb": 0, 00:15:13.044 "state": "online", 00:15:13.044 "raid_level": "raid1", 00:15:13.044 "superblock": true, 00:15:13.044 "num_base_bdevs": 2, 00:15:13.044 "num_base_bdevs_discovered": 2, 00:15:13.044 "num_base_bdevs_operational": 2, 00:15:13.044 "process": { 00:15:13.044 "type": "rebuild", 00:15:13.044 "target": "spare", 00:15:13.044 "progress": { 00:15:13.044 "blocks": 5376, 00:15:13.044 "percent": 67 00:15:13.044 } 00:15:13.044 }, 00:15:13.044 "base_bdevs_list": [ 00:15:13.044 { 00:15:13.044 "name": "spare", 00:15:13.044 "uuid": "127ec652-7077-54cc-9982-1a2710a87589", 00:15:13.045 "is_configured": true, 00:15:13.045 "data_offset": 256, 00:15:13.045 "data_size": 7936 00:15:13.045 }, 00:15:13.045 { 00:15:13.045 "name": "BaseBdev2", 00:15:13.045 "uuid": "35f9e0e9-8de7-5e7e-b73b-63841dbe420f", 00:15:13.045 "is_configured": true, 00:15:13.045 "data_offset": 256, 00:15:13.045 "data_size": 7936 00:15:13.045 } 00:15:13.045 ] 00:15:13.045 }' 00:15:13.045 09:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.045 09:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.045 09:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.045 09:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.045 09:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:13.977 [2024-10-30 09:49:52.400635] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:13.977 [2024-10-30 09:49:52.400691] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:13.977 [2024-10-30 09:49:52.400766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.977 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:13.977 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.977 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.977 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.977 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.977 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.977 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.977 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.977 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.977 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.237 "name": "raid_bdev1", 00:15:14.237 "uuid": "fb92e769-bc26-417f-8407-c05c6cc4967e", 00:15:14.237 "strip_size_kb": 0, 00:15:14.237 "state": "online", 00:15:14.237 "raid_level": "raid1", 00:15:14.237 "superblock": true, 00:15:14.237 "num_base_bdevs": 2, 00:15:14.237 
"num_base_bdevs_discovered": 2, 00:15:14.237 "num_base_bdevs_operational": 2, 00:15:14.237 "base_bdevs_list": [ 00:15:14.237 { 00:15:14.237 "name": "spare", 00:15:14.237 "uuid": "127ec652-7077-54cc-9982-1a2710a87589", 00:15:14.237 "is_configured": true, 00:15:14.237 "data_offset": 256, 00:15:14.237 "data_size": 7936 00:15:14.237 }, 00:15:14.237 { 00:15:14.237 "name": "BaseBdev2", 00:15:14.237 "uuid": "35f9e0e9-8de7-5e7e-b73b-63841dbe420f", 00:15:14.237 "is_configured": true, 00:15:14.237 "data_offset": 256, 00:15:14.237 "data_size": 7936 00:15:14.237 } 00:15:14.237 ] 00:15:14.237 }' 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.237 09:49:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.237 "name": "raid_bdev1", 00:15:14.237 "uuid": "fb92e769-bc26-417f-8407-c05c6cc4967e", 00:15:14.237 "strip_size_kb": 0, 00:15:14.237 "state": "online", 00:15:14.237 "raid_level": "raid1", 00:15:14.237 "superblock": true, 00:15:14.237 "num_base_bdevs": 2, 00:15:14.237 "num_base_bdevs_discovered": 2, 00:15:14.237 "num_base_bdevs_operational": 2, 00:15:14.237 "base_bdevs_list": [ 00:15:14.237 { 00:15:14.237 "name": "spare", 00:15:14.237 "uuid": "127ec652-7077-54cc-9982-1a2710a87589", 00:15:14.237 "is_configured": true, 00:15:14.237 "data_offset": 256, 00:15:14.237 "data_size": 7936 00:15:14.237 }, 00:15:14.237 { 00:15:14.237 "name": "BaseBdev2", 00:15:14.237 "uuid": "35f9e0e9-8de7-5e7e-b73b-63841dbe420f", 00:15:14.237 "is_configured": true, 00:15:14.237 "data_offset": 256, 00:15:14.237 "data_size": 7936 00:15:14.237 } 00:15:14.237 ] 00:15:14.237 }' 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:14.237 09:49:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.237 "name": 
"raid_bdev1", 00:15:14.237 "uuid": "fb92e769-bc26-417f-8407-c05c6cc4967e", 00:15:14.237 "strip_size_kb": 0, 00:15:14.237 "state": "online", 00:15:14.237 "raid_level": "raid1", 00:15:14.237 "superblock": true, 00:15:14.237 "num_base_bdevs": 2, 00:15:14.237 "num_base_bdevs_discovered": 2, 00:15:14.237 "num_base_bdevs_operational": 2, 00:15:14.237 "base_bdevs_list": [ 00:15:14.237 { 00:15:14.237 "name": "spare", 00:15:14.237 "uuid": "127ec652-7077-54cc-9982-1a2710a87589", 00:15:14.237 "is_configured": true, 00:15:14.237 "data_offset": 256, 00:15:14.237 "data_size": 7936 00:15:14.237 }, 00:15:14.237 { 00:15:14.237 "name": "BaseBdev2", 00:15:14.237 "uuid": "35f9e0e9-8de7-5e7e-b73b-63841dbe420f", 00:15:14.237 "is_configured": true, 00:15:14.237 "data_offset": 256, 00:15:14.237 "data_size": 7936 00:15:14.237 } 00:15:14.237 ] 00:15:14.237 }' 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.237 09:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:14.504 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:14.504 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.504 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:14.504 [2024-10-30 09:49:53.074262] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:14.504 [2024-10-30 09:49:53.074290] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:14.504 [2024-10-30 09:49:53.074349] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.504 [2024-10-30 09:49:53.074404] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:14.504 [2024-10-30 
09:49:53.074413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:14.504 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.504 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.504 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:15:14.504 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.504 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:14.504 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.504 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:14.504 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:15:14.504 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:14.504 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:14.504 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.505 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:14.505 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.505 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:14.505 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.505 09:49:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:14.505 [2024-10-30 09:49:53.122261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:14.505 [2024-10-30 09:49:53.122302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.505 [2024-10-30 09:49:53.122317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:14.505 [2024-10-30 09:49:53.122325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.505 [2024-10-30 09:49:53.123878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.505 [2024-10-30 09:49:53.123906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:14.505 [2024-10-30 09:49:53.123947] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:14.505 [2024-10-30 09:49:53.123985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:14.505 [2024-10-30 09:49:53.124071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:14.762 spare 00:15:14.762 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.762 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:14.762 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.762 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:14.762 [2024-10-30 09:49:53.224137] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:14.762 [2024-10-30 09:49:53.224162] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:14.762 [2024-10-30 09:49:53.224230] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:14.762 [2024-10-30 09:49:53.224288] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:14.762 [2024-10-30 09:49:53.224295] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:14.762 [2024-10-30 09:49:53.224359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.762 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.762 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:14.762 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.762 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.762 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:14.762 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:14.762 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:14.762 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.762 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.762 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.762 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.762 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.762 
09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:14.762 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:14.762 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:14.762 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:14.762 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:14.762 "name": "raid_bdev1",
00:15:14.762 "uuid": "fb92e769-bc26-417f-8407-c05c6cc4967e",
00:15:14.762 "strip_size_kb": 0,
00:15:14.762 "state": "online",
00:15:14.762 "raid_level": "raid1",
00:15:14.762 "superblock": true,
00:15:14.762 "num_base_bdevs": 2,
00:15:14.762 "num_base_bdevs_discovered": 2,
00:15:14.762 "num_base_bdevs_operational": 2,
00:15:14.762 "base_bdevs_list": [
00:15:14.762 {
00:15:14.762 "name": "spare",
00:15:14.762 "uuid": "127ec652-7077-54cc-9982-1a2710a87589",
00:15:14.762 "is_configured": true,
00:15:14.762 "data_offset": 256,
00:15:14.762 "data_size": 7936
00:15:14.762 },
00:15:14.762 {
00:15:14.762 "name": "BaseBdev2",
00:15:14.762 "uuid": "35f9e0e9-8de7-5e7e-b73b-63841dbe420f",
00:15:14.762 "is_configured": true,
00:15:14.762 "data_offset": 256,
00:15:14.762 "data_size": 7936
00:15:14.762 }
00:15:14.762 ]
00:15:14.762 }'
00:15:14.762 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:14.762 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:15.021 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:15.021 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:15.021 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:15.021 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:15.021 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:15.021 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:15.021 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:15.021 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:15.021 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:15.021 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:15.021 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:15.021 "name": "raid_bdev1",
00:15:15.021 "uuid": "fb92e769-bc26-417f-8407-c05c6cc4967e",
00:15:15.021 "strip_size_kb": 0,
00:15:15.021 "state": "online",
00:15:15.021 "raid_level": "raid1",
00:15:15.021 "superblock": true,
00:15:15.021 "num_base_bdevs": 2,
00:15:15.021 "num_base_bdevs_discovered": 2,
00:15:15.021 "num_base_bdevs_operational": 2,
00:15:15.021 "base_bdevs_list": [
00:15:15.021 {
00:15:15.021 "name": "spare",
00:15:15.021 "uuid": "127ec652-7077-54cc-9982-1a2710a87589",
00:15:15.021 "is_configured": true,
00:15:15.021 "data_offset": 256,
00:15:15.021 "data_size": 7936
00:15:15.021 },
00:15:15.021 {
00:15:15.021 "name": "BaseBdev2",
00:15:15.021 "uuid": "35f9e0e9-8de7-5e7e-b73b-63841dbe420f",
00:15:15.021 "is_configured": true,
00:15:15.021 "data_offset": 256,
00:15:15.021 "data_size": 7936
00:15:15.021 }
00:15:15.021 ]
00:15:15.021 }'
00:15:15.021 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:15.021 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:15.021 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:15.021 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:15.021 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:15.021 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:15.021 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:15.021 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:15:15.021 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:15.021 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:15:15.021 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:15:15.021 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:15.021 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:15.279 [2024-10-30 09:49:53.642412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:15.279 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:15.279 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:15:15.279 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:15.279 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:15.279 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:15.279 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:15.279 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:15:15.279 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:15.279 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:15.279 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:15.279 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:15.279 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:15.279 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:15.279 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:15.279 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:15.279 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:15.279 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:15.279 "name": "raid_bdev1",
00:15:15.279 "uuid": "fb92e769-bc26-417f-8407-c05c6cc4967e",
00:15:15.279 "strip_size_kb": 0,
00:15:15.279 "state": "online",
00:15:15.279 "raid_level": "raid1",
00:15:15.279 "superblock": true,
00:15:15.279 "num_base_bdevs": 2,
00:15:15.279 "num_base_bdevs_discovered": 1,
00:15:15.279 "num_base_bdevs_operational": 1,
00:15:15.279 "base_bdevs_list": [
00:15:15.279 {
00:15:15.279 "name": null,
00:15:15.279 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:15.279 "is_configured": false,
00:15:15.279 "data_offset": 0,
00:15:15.279 "data_size": 7936
00:15:15.279 },
00:15:15.279 {
00:15:15.279 "name": "BaseBdev2",
00:15:15.279 "uuid": "35f9e0e9-8de7-5e7e-b73b-63841dbe420f",
00:15:15.279 "is_configured": true,
00:15:15.279 "data_offset": 256,
00:15:15.279 "data_size": 7936
00:15:15.279 }
00:15:15.279 ]
00:15:15.279 }'
00:15:15.279 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:15.279 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:15.537 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:15:15.537 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:15.537 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:15.537 [2024-10-30 09:49:53.954484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:15.537 [2024-10-30 09:49:53.954603] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:15:15.537 [2024-10-30 09:49:53.954615] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:15:15.537 [2024-10-30 09:49:53.954642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:15.537 [2024-10-30 09:49:53.963213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:15:15.537 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:15.537 09:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1
00:15:15.537 [2024-10-30 09:49:53.964679] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:16.473 09:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:16.473 09:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:16.473 09:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:16.473 09:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:16.473 09:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:16.473 09:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:16.473 09:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:16.473 09:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:16.473 09:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:16.473 09:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:16.473 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:16.473 "name": "raid_bdev1",
00:15:16.473 "uuid": "fb92e769-bc26-417f-8407-c05c6cc4967e",
00:15:16.473 "strip_size_kb": 0,
00:15:16.473 "state": "online",
00:15:16.473 "raid_level": "raid1",
00:15:16.473 "superblock": true,
00:15:16.473 "num_base_bdevs": 2,
00:15:16.473 "num_base_bdevs_discovered": 2,
00:15:16.473 "num_base_bdevs_operational": 2,
00:15:16.473 "process": {
00:15:16.473 "type": "rebuild",
00:15:16.473 "target": "spare",
00:15:16.473 "progress": {
00:15:16.473 "blocks": 2560,
00:15:16.473 "percent": 32
00:15:16.473 }
00:15:16.473 },
00:15:16.473 "base_bdevs_list": [
00:15:16.473 {
00:15:16.473 "name": "spare",
00:15:16.473 "uuid": "127ec652-7077-54cc-9982-1a2710a87589",
00:15:16.473 "is_configured": true,
00:15:16.473 "data_offset": 256,
00:15:16.473 "data_size": 7936
00:15:16.473 },
00:15:16.473 {
00:15:16.473 "name": "BaseBdev2",
00:15:16.473 "uuid": "35f9e0e9-8de7-5e7e-b73b-63841dbe420f",
00:15:16.473 "is_configured": true,
00:15:16.473 "data_offset": 256,
00:15:16.473 "data_size": 7936
00:15:16.473 }
00:15:16.473 ]
00:15:16.473 }'
00:15:16.473 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:16.473 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:16.473 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:16.473 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:16.473 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:15:16.473 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:16.473 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:16.473 [2024-10-30 09:49:55.070971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:16.733 [2024-10-30 09:49:55.169284] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:15:16.733 [2024-10-30 09:49:55.169335] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:16.733 [2024-10-30 09:49:55.169346] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:16.733 [2024-10-30 09:49:55.169353] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:15:16.733 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:16.733 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:15:16.733 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:16.733 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:16.733 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:16.733 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:16.733 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:15:16.733 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:16.733 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:16.733 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:16.733 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:16.733 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:16.733 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:16.733 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:16.733 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:16.733 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:16.733 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:16.733 "name": "raid_bdev1",
00:15:16.733 "uuid": "fb92e769-bc26-417f-8407-c05c6cc4967e",
00:15:16.733 "strip_size_kb": 0,
00:15:16.733 "state": "online",
00:15:16.733 "raid_level": "raid1",
00:15:16.733 "superblock": true,
00:15:16.733 "num_base_bdevs": 2,
00:15:16.733 "num_base_bdevs_discovered": 1,
00:15:16.733 "num_base_bdevs_operational": 1,
00:15:16.733 "base_bdevs_list": [
00:15:16.733 {
00:15:16.733 "name": null,
00:15:16.733 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:16.733 "is_configured": false,
00:15:16.733 "data_offset": 0,
00:15:16.733 "data_size": 7936
00:15:16.733 },
00:15:16.733 {
00:15:16.733 "name": "BaseBdev2",
00:15:16.733 "uuid": "35f9e0e9-8de7-5e7e-b73b-63841dbe420f",
00:15:16.733 "is_configured": true,
00:15:16.733 "data_offset": 256,
00:15:16.733 "data_size": 7936
00:15:16.733 }
00:15:16.733 ]
00:15:16.733 }'
00:15:16.733 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:16.733 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:16.991 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:15:16.992 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:16.992 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:16.992 [2024-10-30 09:49:55.482900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:15:16.992 [2024-10-30 09:49:55.482943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:16.992 [2024-10-30 09:49:55.482963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:15:16.992 [2024-10-30 09:49:55.482972] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:16.992 [2024-10-30 09:49:55.483116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:16.992 [2024-10-30 09:49:55.483127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:15:16.992 [2024-10-30 09:49:55.483165] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:15:16.992 [2024-10-30 09:49:55.483182] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:15:16.992 [2024-10-30 09:49:55.483189] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:15:16.992 [2024-10-30 09:49:55.483207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:16.992 [2024-10-30 09:49:55.491437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:15:16.992 spare
00:15:16.992 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:16.992 09:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1
00:15:16.992 [2024-10-30 09:49:55.492906] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:17.927 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:17.927 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:17.927 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:17.927 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:17.927 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:17.927 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:17.927 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:17.927 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:17.927 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:17.927 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:17.927 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:17.927 "name": "raid_bdev1",
00:15:17.927 "uuid": "fb92e769-bc26-417f-8407-c05c6cc4967e",
00:15:17.927 "strip_size_kb": 0,
00:15:17.927 "state": "online",
00:15:17.927 "raid_level": "raid1",
00:15:17.927 "superblock": true,
00:15:17.927 "num_base_bdevs": 2,
00:15:17.927 "num_base_bdevs_discovered": 2,
00:15:17.927 "num_base_bdevs_operational": 2,
00:15:17.927 "process": {
00:15:17.927 "type": "rebuild",
00:15:17.927 "target": "spare",
00:15:17.927 "progress": {
00:15:17.927 "blocks": 2560,
00:15:17.927 "percent": 32
00:15:17.927 }
00:15:17.927 },
00:15:17.927 "base_bdevs_list": [
00:15:17.927 {
00:15:17.927 "name": "spare",
00:15:17.927 "uuid": "127ec652-7077-54cc-9982-1a2710a87589",
00:15:17.927 "is_configured": true,
00:15:17.927 "data_offset": 256,
00:15:17.927 "data_size": 7936
00:15:17.927 },
00:15:17.927 {
00:15:17.927 "name": "BaseBdev2",
00:15:17.927 "uuid": "35f9e0e9-8de7-5e7e-b73b-63841dbe420f",
00:15:17.927 "is_configured": true,
00:15:17.927 "data_offset": 256,
00:15:17.927 "data_size": 7936
00:15:17.927 }
00:15:17.927 ]
00:15:17.927 }'
00:15:17.927 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:18.186 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:18.186 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:18.186 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:18.186 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:15:18.186 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:18.186 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:18.186 [2024-10-30 09:49:56.599139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:18.186 [2024-10-30 09:49:56.697438] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:15:18.186 [2024-10-30 09:49:56.697484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:18.186 [2024-10-30 09:49:56.697497] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:18.186 [2024-10-30 09:49:56.697502] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:15:18.186 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:18.186 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:15:18.186 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:18.186 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:18.186 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:18.186 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:18.186 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:15:18.186 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:18.186 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:18.186 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:18.186 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:18.186 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:18.186 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:18.186 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:18.186 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:18.186 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:18.186 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:18.186 "name": "raid_bdev1",
00:15:18.186 "uuid": "fb92e769-bc26-417f-8407-c05c6cc4967e",
00:15:18.186 "strip_size_kb": 0,
00:15:18.186 "state": "online",
00:15:18.186 "raid_level": "raid1",
00:15:18.186 "superblock": true,
00:15:18.186 "num_base_bdevs": 2,
00:15:18.186 "num_base_bdevs_discovered": 1,
00:15:18.186 "num_base_bdevs_operational": 1,
00:15:18.186 "base_bdevs_list": [
00:15:18.186 {
00:15:18.186 "name": null,
00:15:18.186 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:18.186 "is_configured": false,
00:15:18.186 "data_offset": 0,
00:15:18.186 "data_size": 7936
00:15:18.186 },
00:15:18.186 {
00:15:18.186 "name": "BaseBdev2",
00:15:18.186 "uuid": "35f9e0e9-8de7-5e7e-b73b-63841dbe420f",
00:15:18.186 "is_configured": true,
00:15:18.186 "data_offset": 256,
00:15:18.186 "data_size": 7936
00:15:18.186 }
00:15:18.186 ]
00:15:18.186 }'
00:15:18.186 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:18.186 09:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:18.445 09:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:18.445 09:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:18.445 09:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:18.445 09:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:18.445 09:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:18.445 09:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:18.445 09:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:18.445 09:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:18.445 09:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:18.445 09:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:18.445 09:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:18.445 "name": "raid_bdev1",
00:15:18.445 "uuid": "fb92e769-bc26-417f-8407-c05c6cc4967e",
00:15:18.445 "strip_size_kb": 0,
00:15:18.445 "state": "online",
00:15:18.445 "raid_level": "raid1",
00:15:18.445 "superblock": true,
00:15:18.445 "num_base_bdevs": 2,
00:15:18.445 "num_base_bdevs_discovered": 1,
00:15:18.445 "num_base_bdevs_operational": 1,
00:15:18.445 "base_bdevs_list": [
00:15:18.445 {
00:15:18.445 "name": null,
00:15:18.445 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:18.445 "is_configured": false,
00:15:18.445 "data_offset": 0,
00:15:18.445 "data_size": 7936
00:15:18.445 },
00:15:18.445 {
00:15:18.445 "name": "BaseBdev2",
00:15:18.445 "uuid": "35f9e0e9-8de7-5e7e-b73b-63841dbe420f",
00:15:18.445 "is_configured": true,
00:15:18.445 "data_offset": 256,
00:15:18.445 "data_size": 7936
00:15:18.445 }
00:15:18.445 ]
00:15:18.445 }'
00:15:18.445 09:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:18.703 09:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:18.703 09:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:18.703 09:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:18.703 09:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:15:18.703 09:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:18.703 09:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:18.703 09:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:18.703 09:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:15:18.703 09:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:18.703 09:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:18.703 [2024-10-30 09:49:57.123105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:15:18.703 [2024-10-30 09:49:57.123148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:18.703 [2024-10-30 09:49:57.123166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:15:18.703 [2024-10-30 09:49:57.123173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:18.703 [2024-10-30 09:49:57.123298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:18.703 [2024-10-30 09:49:57.123307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:15:18.703 [2024-10-30 09:49:57.123345] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:15:18.703 [2024-10-30 09:49:57.123355] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:15:18.703 [2024-10-30 09:49:57.123362] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:15:18.703 [2024-10-30 09:49:57.123371] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:15:18.703 BaseBdev1
00:15:18.703 09:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:18.703 09:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1
00:15:19.639 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:15:19.639 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:19.639 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:19.639 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:19.639 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:19.639 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:15:19.639 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:19.639 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:19.639 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:19.639 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:19.639 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:19.639 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:19.639 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:19.639 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:19.639 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:19.639 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:19.639 "name": "raid_bdev1",
00:15:19.639 "uuid": "fb92e769-bc26-417f-8407-c05c6cc4967e",
00:15:19.639 "strip_size_kb": 0,
00:15:19.639 "state": "online",
00:15:19.639 "raid_level": "raid1",
00:15:19.639 "superblock": true,
00:15:19.639 "num_base_bdevs": 2,
00:15:19.639 "num_base_bdevs_discovered": 1,
00:15:19.639 "num_base_bdevs_operational": 1,
00:15:19.639 "base_bdevs_list": [
00:15:19.639 {
00:15:19.639 "name": null,
00:15:19.639 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:19.639 "is_configured": false,
00:15:19.639 "data_offset": 0,
00:15:19.639 "data_size": 7936
00:15:19.639 },
00:15:19.639 {
00:15:19.639 "name": "BaseBdev2",
00:15:19.639 "uuid": "35f9e0e9-8de7-5e7e-b73b-63841dbe420f",
00:15:19.639 "is_configured": true,
00:15:19.639 "data_offset": 256,
00:15:19.639 "data_size": 7936
00:15:19.639 }
00:15:19.639 ]
00:15:19.639 }'
00:15:19.639 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:19.639 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:19.898 "name": "raid_bdev1",
00:15:19.898 "uuid": "fb92e769-bc26-417f-8407-c05c6cc4967e",
00:15:19.898 "strip_size_kb": 0,
00:15:19.898 "state": "online",
00:15:19.898 "raid_level": "raid1",
00:15:19.898 "superblock": true,
00:15:19.898 "num_base_bdevs": 2,
00:15:19.898 "num_base_bdevs_discovered": 1,
00:15:19.898 "num_base_bdevs_operational": 1,
00:15:19.898 "base_bdevs_list": [
00:15:19.898 {
00:15:19.898 "name": null,
00:15:19.898 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:19.898 "is_configured": false,
00:15:19.898 "data_offset": 0,
00:15:19.898 "data_size": 7936
00:15:19.898 },
00:15:19.898 {
00:15:19.898 "name": "BaseBdev2",
00:15:19.898 "uuid": "35f9e0e9-8de7-5e7e-b73b-63841dbe420f",
00:15:19.898 "is_configured": true,
00:15:19.898 "data_offset": 256,
00:15:19.898 "data_size": 7936
00:15:19.898 }
00:15:19.898 ]
00:15:19.898 }'
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:19.898 [2024-10-30 09:49:58.507381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:19.898 [2024-10-30 09:49:58.507496] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:15:19.898 [2024-10-30 09:49:58.507513] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:15:19.898 request:
00:15:19.898 {
00:15:19.898 "base_bdev": "BaseBdev1",
00:15:19.898 "raid_bdev": "raid_bdev1",
00:15:19.898 "method": "bdev_raid_add_base_bdev",
00:15:19.898 "req_id": 1
00:15:19.898 }
00:15:19.898 Got JSON-RPC error response
00:15:19.898 response:
00:15:19.898 {
00:15:19.898 "code": -22,
00:15:19.898 "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:15:19.898 }
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:15:19.898 09:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1
00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state
raid_bdev1 online raid1 0 1 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.272 "name": "raid_bdev1", 00:15:21.272 "uuid": "fb92e769-bc26-417f-8407-c05c6cc4967e", 00:15:21.272 "strip_size_kb": 0, 
00:15:21.272 "state": "online", 00:15:21.272 "raid_level": "raid1", 00:15:21.272 "superblock": true, 00:15:21.272 "num_base_bdevs": 2, 00:15:21.272 "num_base_bdevs_discovered": 1, 00:15:21.272 "num_base_bdevs_operational": 1, 00:15:21.272 "base_bdevs_list": [ 00:15:21.272 { 00:15:21.272 "name": null, 00:15:21.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.272 "is_configured": false, 00:15:21.272 "data_offset": 0, 00:15:21.272 "data_size": 7936 00:15:21.272 }, 00:15:21.272 { 00:15:21.272 "name": "BaseBdev2", 00:15:21.272 "uuid": "35f9e0e9-8de7-5e7e-b73b-63841dbe420f", 00:15:21.272 "is_configured": true, 00:15:21.272 "data_offset": 256, 00:15:21.272 "data_size": 7936 00:15:21.272 } 00:15:21.272 ] 00:15:21.272 }' 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.272 
09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.272 "name": "raid_bdev1", 00:15:21.272 "uuid": "fb92e769-bc26-417f-8407-c05c6cc4967e", 00:15:21.272 "strip_size_kb": 0, 00:15:21.272 "state": "online", 00:15:21.272 "raid_level": "raid1", 00:15:21.272 "superblock": true, 00:15:21.272 "num_base_bdevs": 2, 00:15:21.272 "num_base_bdevs_discovered": 1, 00:15:21.272 "num_base_bdevs_operational": 1, 00:15:21.272 "base_bdevs_list": [ 00:15:21.272 { 00:15:21.272 "name": null, 00:15:21.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.272 "is_configured": false, 00:15:21.272 "data_offset": 0, 00:15:21.272 "data_size": 7936 00:15:21.272 }, 00:15:21.272 { 00:15:21.272 "name": "BaseBdev2", 00:15:21.272 "uuid": "35f9e0e9-8de7-5e7e-b73b-63841dbe420f", 00:15:21.272 "is_configured": true, 00:15:21.272 "data_offset": 256, 00:15:21.272 "data_size": 7936 00:15:21.272 } 00:15:21.272 ] 00:15:21.272 }' 00:15:21.272 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.273 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:21.273 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.529 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:21.529 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 86471 00:15:21.529 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 86471 ']' 00:15:21.529 09:49:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 86471 00:15:21.529 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:15:21.529 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:21.529 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86471 00:15:21.529 killing process with pid 86471 00:15:21.529 Received shutdown signal, test time was about 60.000000 seconds 00:15:21.529 00:15:21.529 Latency(us) 00:15:21.529 [2024-10-30T09:50:00.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.529 [2024-10-30T09:50:00.149Z] =================================================================================================================== 00:15:21.529 [2024-10-30T09:50:00.149Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:21.529 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:21.529 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:21.529 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86471' 00:15:21.529 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 86471 00:15:21.529 [2024-10-30 09:49:59.920908] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:21.529 09:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 86471 00:15:21.529 [2024-10-30 09:49:59.921005] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.529 [2024-10-30 09:49:59.921040] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:15:21.529 [2024-10-30 09:49:59.921049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:21.529 [2024-10-30 09:50:00.062959] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:22.095 ************************************ 00:15:22.095 END TEST raid_rebuild_test_sb_md_interleaved 00:15:22.095 ************************************ 00:15:22.095 09:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:15:22.095 00:15:22.095 real 0m14.695s 00:15:22.095 user 0m18.568s 00:15:22.095 sys 0m1.001s 00:15:22.095 09:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:22.095 09:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:22.095 09:50:00 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:15:22.095 09:50:00 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:15:22.095 09:50:00 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 86471 ']' 00:15:22.095 09:50:00 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 86471 00:15:22.096 09:50:00 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:15:22.096 ************************************ 00:15:22.096 END TEST bdev_raid 00:15:22.096 ************************************ 00:15:22.096 00:15:22.096 real 9m25.271s 00:15:22.096 user 12m34.865s 00:15:22.096 sys 1m16.126s 00:15:22.096 09:50:00 bdev_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:22.096 09:50:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:22.096 09:50:00 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:15:22.096 09:50:00 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:22.096 09:50:00 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:22.096 09:50:00 -- common/autotest_common.sh@10 -- # set +x 00:15:22.096 
************************************ 00:15:22.096 START TEST spdkcli_raid 00:15:22.096 ************************************ 00:15:22.096 09:50:00 spdkcli_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:15:22.354 * Looking for test storage... 00:15:22.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:15:22.354 09:50:00 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:22.354 09:50:00 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:22.354 09:50:00 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:15:22.354 09:50:00 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:22.354 09:50:00 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:22.354 09:50:00 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:22.354 09:50:00 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:22.354 09:50:00 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:15:22.354 09:50:00 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:15:22.354 09:50:00 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:15:22.354 09:50:00 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:15:22.354 09:50:00 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:15:22.354 09:50:00 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:15:22.354 09:50:00 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:15:22.355 09:50:00 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:22.355 09:50:00 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:15:22.355 09:50:00 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:15:22.355 09:50:00 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:22.355 09:50:00 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:22.355 09:50:00 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:15:22.355 09:50:00 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:15:22.355 09:50:00 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:22.355 09:50:00 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:15:22.355 09:50:00 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:15:22.355 09:50:00 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:15:22.355 09:50:00 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:15:22.355 09:50:00 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:22.355 09:50:00 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:15:22.355 09:50:00 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:15:22.355 09:50:00 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:22.355 09:50:00 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:22.355 09:50:00 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:15:22.355 09:50:00 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:22.355 09:50:00 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:22.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.355 --rc genhtml_branch_coverage=1 00:15:22.355 --rc genhtml_function_coverage=1 00:15:22.355 --rc genhtml_legend=1 00:15:22.355 --rc geninfo_all_blocks=1 00:15:22.355 --rc geninfo_unexecuted_blocks=1 00:15:22.355 00:15:22.355 ' 00:15:22.355 09:50:00 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:22.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.355 --rc genhtml_branch_coverage=1 00:15:22.355 --rc genhtml_function_coverage=1 00:15:22.355 --rc genhtml_legend=1 00:15:22.355 --rc geninfo_all_blocks=1 00:15:22.355 --rc geninfo_unexecuted_blocks=1 00:15:22.355 00:15:22.355 ' 00:15:22.355 
09:50:00 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:22.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.355 --rc genhtml_branch_coverage=1 00:15:22.355 --rc genhtml_function_coverage=1 00:15:22.355 --rc genhtml_legend=1 00:15:22.355 --rc geninfo_all_blocks=1 00:15:22.355 --rc geninfo_unexecuted_blocks=1 00:15:22.355 00:15:22.355 ' 00:15:22.355 09:50:00 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:22.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.355 --rc genhtml_branch_coverage=1 00:15:22.355 --rc genhtml_function_coverage=1 00:15:22.355 --rc genhtml_legend=1 00:15:22.355 --rc geninfo_all_blocks=1 00:15:22.355 --rc geninfo_unexecuted_blocks=1 00:15:22.355 00:15:22.355 ' 00:15:22.355 09:50:00 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:15:22.355 09:50:00 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:15:22.355 09:50:00 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:15:22.355 09:50:00 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:15:22.355 09:50:00 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:15:22.355 09:50:00 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:15:22.355 09:50:00 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:15:22.355 09:50:00 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:15:22.355 09:50:00 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:15:22.355 09:50:00 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:15:22.355 09:50:00 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:15:22.355 09:50:00 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:15:22.355 09:50:00 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:15:22.355 09:50:00 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:15:22.355 09:50:00 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:15:22.355 09:50:00 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:15:22.355 09:50:00 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:15:22.355 09:50:00 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:15:22.355 09:50:00 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:15:22.355 09:50:00 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:15:22.355 09:50:00 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:15:22.355 09:50:00 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:15:22.355 09:50:00 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:15:22.355 09:50:00 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:15:22.355 09:50:00 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:15:22.355 09:50:00 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:15:22.355 09:50:00 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:15:22.355 09:50:00 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:15:22.355 09:50:00 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:15:22.355 09:50:00 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:15:22.355 09:50:00 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:15:22.355 09:50:00 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:15:22.355 09:50:00 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:15:22.355 09:50:00 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:22.355 09:50:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:22.355 09:50:00 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:15:22.355 09:50:00 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=87125 00:15:22.355 09:50:00 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:15:22.355 09:50:00 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 87125 00:15:22.355 09:50:00 spdkcli_raid -- common/autotest_common.sh@833 -- # '[' -z 87125 ']' 00:15:22.355 09:50:00 spdkcli_raid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.355 09:50:00 spdkcli_raid -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:22.355 09:50:00 spdkcli_raid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.355 09:50:00 spdkcli_raid -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:22.355 09:50:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:22.355 [2024-10-30 09:50:00.899740] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:15:22.355 [2024-10-30 09:50:00.900346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87125 ] 00:15:22.612 [2024-10-30 09:50:01.061248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:22.612 [2024-10-30 09:50:01.156862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.612 [2024-10-30 09:50:01.156879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.178 09:50:01 spdkcli_raid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:23.178 09:50:01 spdkcli_raid -- common/autotest_common.sh@866 -- # return 0 00:15:23.178 09:50:01 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:15:23.178 09:50:01 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:23.178 09:50:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:23.178 09:50:01 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:15:23.178 09:50:01 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:23.178 09:50:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:23.178 09:50:01 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:15:23.178 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:15:23.178 ' 00:15:25.079 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:15:25.079 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:15:25.079 09:50:03 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:15:25.079 09:50:03 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:25.079 09:50:03 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:15:25.079 09:50:03 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:15:25.079 09:50:03 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:25.079 09:50:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:25.079 09:50:03 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:15:25.079 ' 00:15:26.012 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:15:26.012 09:50:04 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:15:26.012 09:50:04 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:26.012 09:50:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:26.012 09:50:04 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:15:26.012 09:50:04 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:26.012 09:50:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:26.012 09:50:04 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:15:26.012 09:50:04 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:15:26.579 09:50:04 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:15:26.579 09:50:04 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:15:26.579 09:50:04 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:15:26.579 09:50:04 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:26.579 09:50:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:26.579 09:50:04 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:15:26.579 09:50:04 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:26.579 09:50:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:26.579 09:50:04 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:15:26.579 ' 00:15:27.514 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:15:27.514 09:50:06 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:15:27.514 09:50:06 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:27.514 09:50:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:27.514 09:50:06 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:15:27.514 09:50:06 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:27.514 09:50:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:27.514 09:50:06 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:15:27.514 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:15:27.514 ' 00:15:28.885 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:15:28.885 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:15:28.885 09:50:07 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:15:28.885 09:50:07 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:28.886 09:50:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:28.886 09:50:07 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 87125 00:15:28.886 09:50:07 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 87125 ']' 00:15:28.886 09:50:07 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 87125 00:15:28.886 09:50:07 spdkcli_raid -- 
common/autotest_common.sh@957 -- # uname 00:15:28.886 09:50:07 spdkcli_raid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:28.886 09:50:07 spdkcli_raid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87125 00:15:29.144 09:50:07 spdkcli_raid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:29.144 09:50:07 spdkcli_raid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:29.144 09:50:07 spdkcli_raid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87125' 00:15:29.144 killing process with pid 87125 00:15:29.144 09:50:07 spdkcli_raid -- common/autotest_common.sh@971 -- # kill 87125 00:15:29.144 09:50:07 spdkcli_raid -- common/autotest_common.sh@976 -- # wait 87125 00:15:30.078 09:50:08 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:15:30.078 09:50:08 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 87125 ']' 00:15:30.078 09:50:08 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 87125 00:15:30.078 09:50:08 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 87125 ']' 00:15:30.078 09:50:08 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 87125 00:15:30.078 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (87125) - No such process 00:15:30.078 09:50:08 spdkcli_raid -- common/autotest_common.sh@979 -- # echo 'Process with pid 87125 is not found' 00:15:30.078 Process with pid 87125 is not found 00:15:30.078 09:50:08 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:15:30.078 09:50:08 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:15:30.078 09:50:08 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:15:30.078 09:50:08 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:15:30.078 ************************************ 00:15:30.078 END TEST spdkcli_raid 
00:15:30.078 ************************************ 00:15:30.078 00:15:30.078 real 0m7.990s 00:15:30.078 user 0m16.665s 00:15:30.078 sys 0m0.692s 00:15:30.078 09:50:08 spdkcli_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:30.078 09:50:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:30.336 09:50:08 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:15:30.336 09:50:08 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:30.336 09:50:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:30.336 09:50:08 -- common/autotest_common.sh@10 -- # set +x 00:15:30.336 ************************************ 00:15:30.336 START TEST blockdev_raid5f 00:15:30.336 ************************************ 00:15:30.336 09:50:08 blockdev_raid5f -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:15:30.336 * Looking for test storage... 00:15:30.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:30.336 09:50:08 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:30.336 09:50:08 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:15:30.336 09:50:08 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:30.336 09:50:08 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:30.336 09:50:08 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:15:30.336 09:50:08 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:30.336 09:50:08 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:30.336 --rc 
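The `lt`/`cmp_versions` trace above splits each version string into components and compares them positionally. A compact bash sketch of the same idea (simplified: it splits on dots and assumes numeric components, whereas the real scripts/common.sh also splits on `-` and `:`; `version_lt` is an illustrative name):

```shell
# Compare dotted version strings component-wise as integers.
version_lt() {
    local IFS=.
    local -a a b
    read -ra a <<<"$1"
    read -ra b <<<"$2"
    local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < len; i++ )); do
        # missing components default to 0, so 1.15 compares like 1.15.0
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
```

Component-wise comparison is what makes `1.2.3 < 1.10` come out true, which a plain string comparison would get wrong.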
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.336 --rc genhtml_branch_coverage=1 00:15:30.336 --rc genhtml_function_coverage=1 00:15:30.336 --rc genhtml_legend=1 00:15:30.336 --rc geninfo_all_blocks=1 00:15:30.336 --rc geninfo_unexecuted_blocks=1 00:15:30.336 00:15:30.336 ' 00:15:30.336 09:50:08 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:30.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.336 --rc genhtml_branch_coverage=1 00:15:30.336 --rc genhtml_function_coverage=1 00:15:30.336 --rc genhtml_legend=1 00:15:30.336 --rc geninfo_all_blocks=1 00:15:30.336 --rc geninfo_unexecuted_blocks=1 00:15:30.336 00:15:30.336 ' 00:15:30.336 09:50:08 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:30.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.336 --rc genhtml_branch_coverage=1 00:15:30.336 --rc genhtml_function_coverage=1 00:15:30.336 --rc genhtml_legend=1 00:15:30.336 --rc geninfo_all_blocks=1 00:15:30.336 --rc geninfo_unexecuted_blocks=1 00:15:30.336 00:15:30.336 ' 00:15:30.336 09:50:08 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:30.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.336 --rc genhtml_branch_coverage=1 00:15:30.336 --rc genhtml_function_coverage=1 00:15:30.336 --rc genhtml_legend=1 00:15:30.336 --rc geninfo_all_blocks=1 00:15:30.336 --rc geninfo_unexecuted_blocks=1 00:15:30.336 00:15:30.336 ' 00:15:30.336 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:30.336 09:50:08 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:15:30.336 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:30.336 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:30.336 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:30.336 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:30.336 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:15:30.336 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:30.336 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:15:30.336 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:15:30.336 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:15:30.336 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:15:30.336 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:15:30.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.336 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:15:30.336 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:15:30.336 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:15:30.336 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:15:30.336 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:15:30.336 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:15:30.336 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:15:30.337 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:15:30.337 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:15:30.337 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:15:30.337 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:15:30.337 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=87377 00:15:30.337 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' 
SIGINT SIGTERM EXIT 00:15:30.337 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 87377 00:15:30.337 09:50:08 blockdev_raid5f -- common/autotest_common.sh@833 -- # '[' -z 87377 ']' 00:15:30.337 09:50:08 blockdev_raid5f -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.337 09:50:08 blockdev_raid5f -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:30.337 09:50:08 blockdev_raid5f -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.337 09:50:08 blockdev_raid5f -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:30.337 09:50:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:30.337 09:50:08 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:30.337 [2024-10-30 09:50:08.930134] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:15:30.337 [2024-10-30 09:50:08.930392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87377 ] 00:15:30.595 [2024-10-30 09:50:09.091598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.595 [2024-10-30 09:50:09.184529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.162 09:50:09 blockdev_raid5f -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:31.162 09:50:09 blockdev_raid5f -- common/autotest_common.sh@866 -- # return 0 00:15:31.162 09:50:09 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:15:31.162 09:50:09 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:15:31.162 09:50:09 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:15:31.162 09:50:09 blockdev_raid5f -- 
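The `waitforlisten 87377` step above blocks until the freshly launched spdk_tgt is reachable on its RPC UNIX socket. A minimal sketch of that polling idea, assuming the default socket path and retry count seen in the trace (the real helper does more, e.g. an actual RPC round-trip):

```shell
# Poll until the target's RPC socket appears, bailing out early if the
# target process dies. Returns 0 once the socket exists.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    while (( max_retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        [ -S "$rpc_addr" ] && return 0           # socket is up, RPC can proceed
        sleep 0.1
    done
    return 1   # timed out
}
```

Checking the pid on every iteration turns a crashed target into a fast failure instead of a full timeout.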
common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.162 09:50:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:31.420 Malloc0 00:15:31.420 Malloc1 00:15:31.420 Malloc2 00:15:31.420 09:50:09 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.420 09:50:09 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:15:31.420 09:50:09 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.420 09:50:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:31.420 09:50:09 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.420 09:50:09 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:15:31.420 09:50:09 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:15:31.420 09:50:09 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.420 09:50:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:31.420 09:50:09 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.420 09:50:09 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:15:31.420 09:50:09 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.420 09:50:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:31.420 09:50:09 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.420 09:50:09 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:31.420 09:50:09 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.420 09:50:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:31.420 09:50:09 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.420 09:50:09 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:15:31.420 09:50:09 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:15:31.420 09:50:09 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.420 09:50:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:31.420 09:50:09 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:15:31.420 09:50:09 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.420 09:50:09 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:15:31.420 09:50:09 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:15:31.420 09:50:09 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "3c396a35-1a34-4851-80af-4e71d6df6a83"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3c396a35-1a34-4851-80af-4e71d6df6a83",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "3c396a35-1a34-4851-80af-4e71d6df6a83",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "69873f4c-d7c6-476e-acba-4714be20bef8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"bc1f6fca-32f6-416d-8198-536d9c4729c0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "a57802fb-cf69-4aba-ac80-8450b38bcb1e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:15:31.420 09:50:09 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:15:31.420 09:50:09 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:15:31.420 09:50:09 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:15:31.420 09:50:09 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 87377 00:15:31.420 09:50:09 blockdev_raid5f -- common/autotest_common.sh@952 -- # '[' -z 87377 ']' 00:15:31.420 09:50:09 blockdev_raid5f -- common/autotest_common.sh@956 -- # kill -0 87377 00:15:31.420 09:50:09 blockdev_raid5f -- common/autotest_common.sh@957 -- # uname 00:15:31.420 09:50:09 blockdev_raid5f -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:31.420 09:50:09 blockdev_raid5f -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87377 00:15:31.420 killing process with pid 87377 00:15:31.420 09:50:10 blockdev_raid5f -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:31.420 09:50:10 blockdev_raid5f -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:31.420 09:50:10 blockdev_raid5f -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87377' 00:15:31.420 09:50:10 blockdev_raid5f -- common/autotest_common.sh@971 -- # kill 87377 00:15:31.420 09:50:10 blockdev_raid5f -- common/autotest_common.sh@976 -- # wait 87377 00:15:33.322 09:50:11 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:33.322 09:50:11 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:15:33.322 09:50:11 
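The `bdev_get_bdevs` step above pipes the RPC output through jq to keep only unclaimed bdevs and then extract each `.name`. A standalone sketch with abbreviated inline JSON (illustrative sample data, not real RPC output):

```shell
# Filter a bdev list down to unclaimed entries and pull out their names,
# mirroring the jq expressions used in blockdev.sh.
bdevs='[
  {"name": "raid5f",  "claimed": false, "product_name": "Raid Volume"},
  {"name": "Malloc0", "claimed": true,  "product_name": "Malloc disk"}
]'
unclaimed=$(printf '%s' "$bdevs" | jq -r '.[] | select(.claimed == false) | .name')
echo "$unclaimed"   # prints: raid5f
```

Filtering out claimed bdevs is what keeps the base Malloc devices (claimed by the raid volume) from being picked as the hello-world target.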
blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:33.322 09:50:11 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:33.322 09:50:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:33.322 ************************************ 00:15:33.322 START TEST bdev_hello_world 00:15:33.322 ************************************ 00:15:33.322 09:50:11 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:15:33.322 [2024-10-30 09:50:11.702092] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:15:33.322 [2024-10-30 09:50:11.702367] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87432 ] 00:15:33.322 [2024-10-30 09:50:11.860321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.581 [2024-10-30 09:50:11.952485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.840 [2024-10-30 09:50:12.331348] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:33.840 [2024-10-30 09:50:12.331396] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:15:33.840 [2024-10-30 09:50:12.331411] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:33.840 [2024-10-30 09:50:12.331863] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:33.840 [2024-10-30 09:50:12.331968] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:33.840 [2024-10-30 09:50:12.331982] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:33.840 [2024-10-30 09:50:12.332032] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:15:33.840 00:15:33.840 [2024-10-30 09:50:12.332048] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:34.778 ************************************ 00:15:34.778 00:15:34.778 real 0m1.544s 00:15:34.778 user 0m1.249s 00:15:34.778 sys 0m0.178s 00:15:34.778 09:50:13 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:34.778 09:50:13 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:34.778 END TEST bdev_hello_world 00:15:34.778 ************************************ 00:15:34.778 09:50:13 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:15:34.778 09:50:13 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:34.778 09:50:13 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:34.778 09:50:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:34.778 ************************************ 00:15:34.778 START TEST bdev_bounds 00:15:34.778 ************************************ 00:15:34.778 09:50:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:15:34.778 09:50:13 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=87470 00:15:34.778 09:50:13 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:34.778 Process bdevio pid: 87470 00:15:34.778 09:50:13 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:34.778 09:50:13 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 87470' 00:15:34.778 09:50:13 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 87470 00:15:34.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:34.778 09:50:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 87470 ']' 00:15:34.778 09:50:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.778 09:50:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:34.778 09:50:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.778 09:50:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:34.778 09:50:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:34.778 [2024-10-30 09:50:13.278938] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:15:34.778 [2024-10-30 09:50:13.279154] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87470 ] 00:15:35.058 [2024-10-30 09:50:13.432213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:35.058 [2024-10-30 09:50:13.529485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.058 [2024-10-30 09:50:13.529791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.058 [2024-10-30 09:50:13.529808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:35.646 09:50:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:35.646 09:50:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:15:35.646 09:50:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:35.646 I/O targets: 00:15:35.646 raid5f: 131072 blocks of 512 bytes (64 
MiB) 00:15:35.646 00:15:35.646 00:15:35.646 CUnit - A unit testing framework for C - Version 2.1-3 00:15:35.646 http://cunit.sourceforge.net/ 00:15:35.646 00:15:35.646 00:15:35.646 Suite: bdevio tests on: raid5f 00:15:35.646 Test: blockdev write read block ...passed 00:15:35.646 Test: blockdev write zeroes read block ...passed 00:15:35.646 Test: blockdev write zeroes read no split ...passed 00:15:35.905 Test: blockdev write zeroes read split ...passed 00:15:35.905 Test: blockdev write zeroes read split partial ...passed 00:15:35.905 Test: blockdev reset ...passed 00:15:35.905 Test: blockdev write read 8 blocks ...passed 00:15:35.905 Test: blockdev write read size > 128k ...passed 00:15:35.905 Test: blockdev write read invalid size ...passed 00:15:35.905 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:35.905 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:35.905 Test: blockdev write read max offset ...passed 00:15:35.905 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:35.905 Test: blockdev writev readv 8 blocks ...passed 00:15:35.905 Test: blockdev writev readv 30 x 1block ...passed 00:15:35.905 Test: blockdev writev readv block ...passed 00:15:35.905 Test: blockdev writev readv size > 128k ...passed 00:15:35.905 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:35.905 Test: blockdev comparev and writev ...passed 00:15:35.905 Test: blockdev nvme passthru rw ...passed 00:15:35.905 Test: blockdev nvme passthru vendor specific ...passed 00:15:35.905 Test: blockdev nvme admin passthru ...passed 00:15:35.905 Test: blockdev copy ...passed 00:15:35.905 00:15:35.905 Run Summary: Type Total Ran Passed Failed Inactive 00:15:35.905 suites 1 1 n/a 0 0 00:15:35.905 tests 23 23 23 0 0 00:15:35.905 asserts 130 130 130 0 n/a 00:15:35.905 00:15:35.905 Elapsed time = 0.443 seconds 00:15:35.905 0 00:15:35.905 09:50:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # 
killprocess 87470 00:15:35.905 09:50:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 87470 ']' 00:15:35.905 09:50:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 87470 00:15:35.905 09:50:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:15:35.905 09:50:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:35.905 09:50:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87470 00:15:35.905 09:50:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:35.905 09:50:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:35.905 09:50:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87470' 00:15:35.905 killing process with pid 87470 00:15:35.905 09:50:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@971 -- # kill 87470 00:15:35.905 09:50:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@976 -- # wait 87470 00:15:36.840 09:50:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:36.840 00:15:36.840 real 0m2.034s 00:15:36.840 user 0m5.193s 00:15:36.840 sys 0m0.244s 00:15:36.840 09:50:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:36.840 09:50:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:36.840 ************************************ 00:15:36.840 END TEST bdev_bounds 00:15:36.840 ************************************ 00:15:36.840 09:50:15 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:15:36.840 09:50:15 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:36.840 09:50:15 blockdev_raid5f -- common/autotest_common.sh@1109 
-- # xtrace_disable 00:15:36.840 09:50:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:36.840 ************************************ 00:15:36.840 START TEST bdev_nbd 00:15:36.840 ************************************ 00:15:36.840 09:50:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:15:36.840 09:50:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:36.840 09:50:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:36.840 09:50:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:36.840 09:50:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:36.840 09:50:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:15:36.840 09:50:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:36.840 09:50:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:15:36.840 09:50:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:36.840 09:50:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:36.840 09:50:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:36.840 09:50:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:15:36.840 09:50:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:15:36.840 09:50:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:36.840 09:50:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:15:36.840 09:50:15 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:36.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:36.840 09:50:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=87524 00:15:36.840 09:50:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:36.840 09:50:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 87524 /var/tmp/spdk-nbd.sock 00:15:36.840 09:50:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 87524 ']' 00:15:36.840 09:50:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:36.840 09:50:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:36.840 09:50:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:36.840 09:50:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:36.840 09:50:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:36.840 09:50:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:36.840 [2024-10-30 09:50:15.370927] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:15:36.840 [2024-10-30 09:50:15.371173] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.098 [2024-10-30 09:50:15.517126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.098 [2024-10-30 09:50:15.592276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.666 09:50:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:37.666 09:50:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:15:37.666 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:15:37.666 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:37.666 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:15:37.666 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:37.666 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:15:37.666 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:37.666 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:15:37.666 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:37.666 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:37.666 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:37.666 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:37.666 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:15:37.666 09:50:16 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:15:37.925 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:37.925 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:37.925 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:37.925 09:50:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:37.925 09:50:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:37.925 09:50:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:37.925 09:50:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:37.925 09:50:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:37.925 09:50:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:37.925 09:50:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:37.925 09:50:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:37.925 09:50:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.925 1+0 records in 00:15:37.925 1+0 records out 00:15:37.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494679 s, 8.3 MB/s 00:15:37.925 09:50:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.925 09:50:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:37.925 09:50:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.925 09:50:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 
00:15:37.925 09:50:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:37.925 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:37.925 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:15:37.925 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:38.185 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:38.185 { 00:15:38.185 "nbd_device": "/dev/nbd0", 00:15:38.185 "bdev_name": "raid5f" 00:15:38.185 } 00:15:38.185 ]' 00:15:38.185 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:38.185 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:38.185 { 00:15:38.185 "nbd_device": "/dev/nbd0", 00:15:38.185 "bdev_name": "raid5f" 00:15:38.185 } 00:15:38.185 ]' 00:15:38.185 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:38.185 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:38.185 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:38.185 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:38.185 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:38.185 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:38.185 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.185 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:38.444 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:15:38.444 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:38.444 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:38.444 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.444 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.444 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:38.444 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:38.444 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.444 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:38.444 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:38.444 09:50:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:38.444 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:38.444 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:38.444 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:15:38.702 /dev/nbd0 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:38.702 09:50:17 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:38.702 09:50:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:38.702 1+0 records in 00:15:38.702 1+0 records out 00:15:38.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265635 s, 15.4 MB/s 00:15:38.960 09:50:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:38.961 { 00:15:38.961 "nbd_device": "/dev/nbd0", 00:15:38.961 "bdev_name": "raid5f" 00:15:38.961 } 00:15:38.961 ]' 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:38.961 { 00:15:38.961 "nbd_device": "/dev/nbd0", 00:15:38.961 "bdev_name": "raid5f" 00:15:38.961 } 00:15:38.961 ]' 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:38.961 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:39.220 256+0 records in 00:15:39.220 256+0 records out 00:15:39.220 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113074 s, 92.7 MB/s 00:15:39.220 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:39.220 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:39.220 256+0 records in 00:15:39.220 256+0 records out 00:15:39.220 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198986 s, 52.7 MB/s 00:15:39.220 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:15:39.220 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:15:39.220 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:39.220 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:39.220 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:39.220 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:39.220 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:39.220 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:39.220 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:39.220 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:39.220 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:39.220 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:39.220 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:39.220 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:39.220 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:39.220 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.220 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:39.478 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:39.478 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:39.478 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:39.478 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.478 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.478 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:39.478 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:39.478 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.478 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:39.478 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:39.478 09:50:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:15:39.478 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:39.478 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:39.478 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:39.478 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:39.478 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:39.478 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:39.736 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:39.736 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:39.736 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:39.736 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:39.736 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:39.736 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:39.736 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:39.736 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:39.736 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:39.736 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:39.736 malloc_lvol_verify 00:15:39.736 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:39.994 01c0f4d6-39f4-440d-b7ac-3a55f28ffc5f 00:15:39.994 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:40.252 74c36669-a7a0-4abd-98de-ef61c2131e64 00:15:40.252 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:40.252 /dev/nbd0 00:15:40.509 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:40.509 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:40.509 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:40.509 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:40.509 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:40.509 mke2fs 1.47.0 (5-Feb-2023) 00:15:40.509 Discarding device blocks: 0/4096 done 00:15:40.509 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:40.509 00:15:40.509 Allocating group tables: 0/1 done 00:15:40.509 Writing inode tables: 0/1 done 00:15:40.509 Creating journal (1024 blocks): done 00:15:40.509 Writing superblocks and filesystem accounting information: 0/1 done 00:15:40.509 00:15:40.509 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:40.509 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:40.509 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:40.509 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:40.509 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:40.509 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.509 09:50:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:40.509 09:50:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:40.509 09:50:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:40.509 09:50:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:40.509 09:50:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.509 09:50:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.509 09:50:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:40.509 09:50:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:40.509 09:50:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.509 09:50:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 87524 00:15:40.509 09:50:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 87524 ']' 00:15:40.509 09:50:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 87524 00:15:40.509 09:50:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:15:40.509 09:50:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:40.509 09:50:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87524 00:15:40.767 killing process with pid 87524 00:15:40.767 09:50:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:40.767 09:50:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:40.767 09:50:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87524' 00:15:40.767 09:50:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@971 -- # kill 87524 00:15:40.767 09:50:19 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@976 -- # wait 87524 00:15:41.330 09:50:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:41.330 00:15:41.330 real 0m4.560s 00:15:41.330 user 0m6.619s 00:15:41.330 sys 0m0.933s 00:15:41.330 09:50:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:41.330 09:50:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:41.330 ************************************ 00:15:41.330 END TEST bdev_nbd 00:15:41.330 ************************************ 00:15:41.330 09:50:19 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:15:41.330 09:50:19 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:15:41.330 09:50:19 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:15:41.330 09:50:19 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:15:41.330 09:50:19 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:41.330 09:50:19 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:41.330 09:50:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:41.330 ************************************ 00:15:41.330 START TEST bdev_fio 00:15:41.330 ************************************ 00:15:41.331 09:50:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:15:41.331 09:50:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:41.331 09:50:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:41.331 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:41.331 09:50:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:41.331 09:50:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:41.331 09:50:19 
blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:41.331 09:50:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:41.331 09:50:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:41.331 09:50:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:41.331 09:50:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:15:41.331 09:50:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:15:41.331 09:50:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:15:41.331 09:50:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:15:41.331 09:50:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:41.331 09:50:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:15:41.331 09:50:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:15:41.331 09:50:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:41.331 09:50:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:15:41.331 09:50:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:15:41.331 09:50:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:15:41.331 09:50:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:15:41.331 09:50:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:15:41.587 09:50:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* 
]] 00:15:41.587 09:50:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:15:41.587 09:50:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:41.587 09:50:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:15:41.587 09:50:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:15:41.587 09:50:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:41.588 09:50:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:41.588 09:50:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:15:41.588 09:50:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:41.588 09:50:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:41.588 ************************************ 00:15:41.588 START TEST bdev_fio_rw_verify 00:15:41.588 ************************************ 00:15:41.588 09:50:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:41.588 09:50:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:41.588 09:50:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:15:41.588 09:50:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:41.588 09:50:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:15:41.588 09:50:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:41.588 09:50:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:15:41.588 09:50:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:15:41.588 09:50:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:41.588 09:50:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:15:41.588 09:50:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:41.588 09:50:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:41.588 09:50:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:41.588 09:50:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:41.588 09:50:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1349 -- # break 00:15:41.588 09:50:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:41.588 09:50:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:41.588 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:41.588 fio-3.35 00:15:41.588 Starting 1 thread 00:15:53.856 00:15:53.856 job_raid5f: (groupid=0, jobs=1): err= 0: pid=87708: Wed Oct 30 09:50:30 2024 00:15:53.856 read: IOPS=13.0k, BW=50.6MiB/s (53.1MB/s)(506MiB/10001msec) 00:15:53.856 slat (usec): min=17, max=127, avg=18.59, stdev= 2.07 00:15:53.856 clat (usec): min=8, max=302, avg=125.87, stdev=45.19 00:15:53.856 lat (usec): min=26, max=321, avg=144.46, stdev=45.67 00:15:53.856 clat percentiles (usec): 00:15:53.856 | 50.000th=[ 130], 99.000th=[ 241], 99.900th=[ 251], 99.990th=[ 269], 00:15:53.856 | 99.999th=[ 297] 00:15:53.856 write: IOPS=13.5k, BW=52.8MiB/s (55.4MB/s)(522MiB/9884msec); 0 zone resets 00:15:53.856 slat (usec): min=7, max=168, avg=15.72, stdev= 2.35 00:15:53.856 clat (usec): min=52, max=962, avg=282.10, stdev=42.19 00:15:53.856 lat (usec): min=66, max=1109, avg=297.82, stdev=43.28 00:15:53.856 clat percentiles (usec): 00:15:53.856 | 50.000th=[ 285], 99.000th=[ 404], 99.900th=[ 424], 99.990th=[ 791], 00:15:53.856 | 99.999th=[ 955] 00:15:53.856 bw ( KiB/s): min=42296, max=58688, per=99.05%, avg=53561.26, stdev=4526.36, samples=19 00:15:53.856 iops : min=10574, max=14672, avg=13390.32, stdev=1131.59, samples=19 00:15:53.856 lat (usec) : 10=0.01%, 50=0.01%, 100=17.45%, 
250=43.16%, 500=39.38% 00:15:53.856 lat (usec) : 750=0.01%, 1000=0.01% 00:15:53.856 cpu : usr=99.26%, sys=0.23%, ctx=28, majf=0, minf=10513 00:15:53.856 IO depths : 1=7.6%, 2=20.0%, 4=55.0%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:53.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:53.856 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:53.856 issued rwts: total=129547,133625,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:53.856 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:53.856 00:15:53.856 Run status group 0 (all jobs): 00:15:53.856 READ: bw=50.6MiB/s (53.1MB/s), 50.6MiB/s-50.6MiB/s (53.1MB/s-53.1MB/s), io=506MiB (531MB), run=10001-10001msec 00:15:53.856 WRITE: bw=52.8MiB/s (55.4MB/s), 52.8MiB/s-52.8MiB/s (55.4MB/s-55.4MB/s), io=522MiB (547MB), run=9884-9884msec 00:15:53.856 ----------------------------------------------------- 00:15:53.856 Suppressions used: 00:15:53.856 count bytes template 00:15:53.856 1 7 /usr/src/fio/parse.c 00:15:53.856 60 5760 /usr/src/fio/iolog.c 00:15:53.856 1 8 libtcmalloc_minimal.so 00:15:53.856 1 904 libcrypto.so 00:15:53.856 ----------------------------------------------------- 00:15:53.856 00:15:53.856 00:15:53.856 real 0m11.848s 00:15:53.856 user 0m12.210s 00:15:53.856 sys 0m0.730s 00:15:53.856 09:50:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:53.856 09:50:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:53.856 ************************************ 00:15:53.856 END TEST bdev_fio_rw_verify 00:15:53.856 ************************************ 00:15:53.856 09:50:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:53.856 09:50:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:53.856 09:50:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:53.856 09:50:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:53.856 09:50:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:15:53.856 09:50:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:15:53.856 09:50:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:15:53.856 09:50:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:15:53.856 09:50:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:53.856 09:50:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:15:53.856 09:50:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:15:53.856 09:50:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:53.856 09:50:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:15:53.856 09:50:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:15:53.856 09:50:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:15:53.856 09:50:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:15:53.856 09:50:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:53.856 09:50:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "3c396a35-1a34-4851-80af-4e71d6df6a83"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3c396a35-1a34-4851-80af-4e71d6df6a83",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "3c396a35-1a34-4851-80af-4e71d6df6a83",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "69873f4c-d7c6-476e-acba-4714be20bef8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "bc1f6fca-32f6-416d-8198-536d9c4729c0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "a57802fb-cf69-4aba-ac80-8450b38bcb1e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:15:53.857 09:50:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:53.857 09:50:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:53.857 /home/vagrant/spdk_repo/spdk 00:15:53.857 09:50:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:53.857 09:50:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:53.857 09:50:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:15:53.857 00:15:53.857 real 
0m12.019s 00:15:53.857 user 0m12.272s 00:15:53.857 sys 0m0.815s 00:15:53.857 ************************************ 00:15:53.857 END TEST bdev_fio 00:15:53.857 ************************************ 00:15:53.857 09:50:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:53.857 09:50:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:53.857 09:50:31 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:53.857 09:50:31 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:53.857 09:50:31 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:15:53.857 09:50:31 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:53.857 09:50:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:53.857 ************************************ 00:15:53.857 START TEST bdev_verify 00:15:53.857 ************************************ 00:15:53.857 09:50:31 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:53.857 [2024-10-30 09:50:32.036738] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 
00:15:53.857 [2024-10-30 09:50:32.036853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87866 ] 00:15:53.857 [2024-10-30 09:50:32.191306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:53.857 [2024-10-30 09:50:32.266891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.857 [2024-10-30 09:50:32.266974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.114 Running I/O for 5 seconds... 00:15:56.423 19658.00 IOPS, 76.79 MiB/s [2024-10-30T09:50:35.610Z] 21125.00 IOPS, 82.52 MiB/s [2024-10-30T09:50:36.985Z] 21581.67 IOPS, 84.30 MiB/s [2024-10-30T09:50:37.926Z] 21862.50 IOPS, 85.40 MiB/s [2024-10-30T09:50:37.926Z] 21992.00 IOPS, 85.91 MiB/s 00:15:59.306 Latency(us) 00:15:59.306 [2024-10-30T09:50:37.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.306 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:59.306 Verification LBA range: start 0x0 length 0x2000 00:15:59.306 raid5f : 5.01 11142.68 43.53 0.00 0.00 17362.16 163.05 15123.69 00:15:59.306 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:59.306 Verification LBA range: start 0x2000 length 0x2000 00:15:59.306 raid5f : 5.01 10835.04 42.32 0.00 0.00 17532.52 164.63 18249.26 00:15:59.306 [2024-10-30T09:50:37.926Z] =================================================================================================================== 00:15:59.306 [2024-10-30T09:50:37.926Z] Total : 21977.73 85.85 0.00 0.00 17446.11 163.05 18249.26 00:15:59.885 00:15:59.885 real 0m6.306s 00:15:59.885 user 0m11.839s 00:15:59.885 sys 0m0.170s 00:15:59.885 09:50:38 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:59.885 
************************************ 00:15:59.885 END TEST bdev_verify 00:15:59.885 ************************************ 00:15:59.885 09:50:38 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:59.885 09:50:38 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:59.885 09:50:38 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:15:59.885 09:50:38 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:59.885 09:50:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:59.885 ************************************ 00:15:59.885 START TEST bdev_verify_big_io 00:15:59.885 ************************************ 00:15:59.885 09:50:38 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:59.885 [2024-10-30 09:50:38.380921] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:15:59.885 [2024-10-30 09:50:38.381018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87953 ] 00:16:00.143 [2024-10-30 09:50:38.531305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:00.143 [2024-10-30 09:50:38.610751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.143 [2024-10-30 09:50:38.610837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.401 Running I/O for 5 seconds... 
00:16:02.340 1014.00 IOPS, 63.38 MiB/s [2024-10-30T09:50:42.334Z] 1141.00 IOPS, 71.31 MiB/s [2024-10-30T09:50:43.268Z] 1184.00 IOPS, 74.00 MiB/s [2024-10-30T09:50:44.198Z] 1189.25 IOPS, 74.33 MiB/s [2024-10-30T09:50:44.198Z] 1205.40 IOPS, 75.34 MiB/s 00:16:05.578 Latency(us) 00:16:05.578 [2024-10-30T09:50:44.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.578 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:05.578 Verification LBA range: start 0x0 length 0x200 00:16:05.578 raid5f : 5.24 606.07 37.88 0.00 0.00 5225331.17 125.24 229073.53 00:16:05.578 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:05.578 Verification LBA range: start 0x200 length 0x200 00:16:05.578 raid5f : 5.22 607.73 37.98 0.00 0.00 5165693.36 151.24 227460.33 00:16:05.578 [2024-10-30T09:50:44.198Z] =================================================================================================================== 00:16:05.578 [2024-10-30T09:50:44.198Z] Total : 1213.80 75.86 0.00 0.00 5195512.27 125.24 229073.53 00:16:06.513 00:16:06.513 real 0m6.542s 00:16:06.513 user 0m12.312s 00:16:06.513 sys 0m0.178s 00:16:06.513 09:50:44 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:06.513 ************************************ 00:16:06.513 END TEST bdev_verify_big_io 00:16:06.513 ************************************ 00:16:06.513 09:50:44 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:06.513 09:50:44 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:06.513 09:50:44 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:16:06.513 09:50:44 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:06.513 09:50:44 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:06.513 ************************************ 00:16:06.513 START TEST bdev_write_zeroes 00:16:06.513 ************************************ 00:16:06.513 09:50:44 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:06.513 [2024-10-30 09:50:44.983970] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:16:06.513 [2024-10-30 09:50:44.984106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88046 ] 00:16:06.771 [2024-10-30 09:50:45.144913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.771 [2024-10-30 09:50:45.228337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.029 Running I/O for 1 seconds... 
00:16:07.963 30207.00 IOPS, 118.00 MiB/s 00:16:07.963 Latency(us) 00:16:07.963 [2024-10-30T09:50:46.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.963 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:07.963 raid5f : 1.01 30171.23 117.86 0.00 0.00 4230.91 1172.09 5772.21 00:16:07.963 [2024-10-30T09:50:46.583Z] =================================================================================================================== 00:16:07.963 [2024-10-30T09:50:46.583Z] Total : 30171.23 117.86 0.00 0.00 4230.91 1172.09 5772.21 00:16:08.924 00:16:08.924 real 0m2.309s 00:16:08.924 user 0m2.019s 00:16:08.924 sys 0m0.171s 00:16:08.924 09:50:47 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:08.924 ************************************ 00:16:08.924 END TEST bdev_write_zeroes 00:16:08.924 ************************************ 00:16:08.924 09:50:47 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:08.924 09:50:47 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:08.924 09:50:47 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:16:08.924 09:50:47 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:08.924 09:50:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:08.924 ************************************ 00:16:08.924 START TEST bdev_json_nonenclosed 00:16:08.924 ************************************ 00:16:08.924 09:50:47 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:08.924 [2024-10-30 
09:50:47.336611] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:16:08.924 [2024-10-30 09:50:47.336699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88088 ] 00:16:08.924 [2024-10-30 09:50:47.486880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.185 [2024-10-30 09:50:47.565191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.185 [2024-10-30 09:50:47.565256] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:09.185 [2024-10-30 09:50:47.565273] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:09.185 [2024-10-30 09:50:47.565280] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:09.185 00:16:09.185 real 0m0.414s 00:16:09.185 user 0m0.229s 00:16:09.185 sys 0m0.082s 00:16:09.185 09:50:47 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:09.185 ************************************ 00:16:09.185 END TEST bdev_json_nonenclosed 00:16:09.185 ************************************ 00:16:09.185 09:50:47 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:09.185 09:50:47 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:09.185 09:50:47 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:16:09.185 09:50:47 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:09.185 09:50:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:09.185 
************************************ 00:16:09.185 START TEST bdev_json_nonarray 00:16:09.185 ************************************ 00:16:09.185 09:50:47 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:09.447 [2024-10-30 09:50:47.816410] Starting SPDK v25.01-pre git sha1 bfbfb6d81 / DPDK 24.03.0 initialization... 00:16:09.447 [2024-10-30 09:50:47.816528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88114 ] 00:16:09.447 [2024-10-30 09:50:47.972667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.708 [2024-10-30 09:50:48.083648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.708 [2024-10-30 09:50:48.083765] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:16:09.709 [2024-10-30 09:50:48.083785] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:09.709 [2024-10-30 09:50:48.083802] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:09.709 00:16:09.709 real 0m0.526s 00:16:09.709 user 0m0.325s 00:16:09.709 sys 0m0.096s 00:16:09.709 09:50:48 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:09.709 ************************************ 00:16:09.709 END TEST bdev_json_nonarray 00:16:09.709 ************************************ 00:16:09.709 09:50:48 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:09.970 09:50:48 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:16:09.970 09:50:48 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:16:09.970 09:50:48 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:16:09.970 09:50:48 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:16:09.970 09:50:48 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:16:09.970 09:50:48 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:09.970 09:50:48 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:09.970 09:50:48 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:16:09.970 09:50:48 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:16:09.970 09:50:48 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:16:09.970 09:50:48 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:16:09.970 00:16:09.970 real 0m39.627s 00:16:09.970 user 0m55.070s 00:16:09.970 sys 0m3.525s 00:16:09.970 09:50:48 blockdev_raid5f -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:09.970 09:50:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:09.970 
************************************ 00:16:09.970 END TEST blockdev_raid5f 00:16:09.970 ************************************ 00:16:09.970 09:50:48 -- spdk/autotest.sh@194 -- # uname -s 00:16:09.970 09:50:48 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:16:09.970 09:50:48 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:16:09.970 09:50:48 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:16:09.970 09:50:48 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:16:09.970 09:50:48 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:16:09.970 09:50:48 -- spdk/autotest.sh@256 -- # timing_exit lib 00:16:09.970 09:50:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:09.970 09:50:48 -- common/autotest_common.sh@10 -- # set +x 00:16:09.970 09:50:48 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:16:09.970 09:50:48 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:16:09.970 09:50:48 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:16:09.970 09:50:48 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:16:09.970 09:50:48 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:16:09.970 09:50:48 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:16:09.970 09:50:48 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:16:09.970 09:50:48 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:16:09.970 09:50:48 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:16:09.970 09:50:48 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:16:09.970 09:50:48 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:16:09.970 09:50:48 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:16:09.970 09:50:48 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:16:09.970 09:50:48 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:16:09.970 09:50:48 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:16:09.970 09:50:48 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:16:09.970 09:50:48 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:16:09.970 09:50:48 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:16:09.970 09:50:48 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:16:09.970 09:50:48 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:16:09.970 09:50:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:09.970 09:50:48 -- common/autotest_common.sh@10 -- # set +x 00:16:09.970 09:50:48 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:16:09.970 09:50:48 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:16:09.970 09:50:48 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:16:09.970 09:50:48 -- common/autotest_common.sh@10 -- # set +x 00:16:11.351 INFO: APP EXITING 00:16:11.351 INFO: killing all VMs 00:16:11.351 INFO: killing vhost app 00:16:11.351 INFO: EXIT DONE 00:16:11.612 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:11.612 Waiting for block devices as requested 00:16:11.612 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:11.612 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:12.611 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:12.611 Cleaning 00:16:12.611 Removing: /var/run/dpdk/spdk0/config 00:16:12.611 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:16:12.611 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:16:12.611 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:16:12.611 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:16:12.611 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:16:12.611 Removing: /var/run/dpdk/spdk0/hugepage_info 00:16:12.611 Removing: /dev/shm/spdk_tgt_trace.pid56207 00:16:12.611 Removing: /var/run/dpdk/spdk0 00:16:12.611 Removing: /var/run/dpdk/spdk_pid55999 00:16:12.611 Removing: /var/run/dpdk/spdk_pid56207 00:16:12.611 Removing: /var/run/dpdk/spdk_pid56419 00:16:12.611 Removing: /var/run/dpdk/spdk_pid56518 00:16:12.611 Removing: /var/run/dpdk/spdk_pid56557 00:16:12.611 Removing: /var/run/dpdk/spdk_pid56680 00:16:12.611 Removing: /var/run/dpdk/spdk_pid56698 
00:16:12.611 Removing: /var/run/dpdk/spdk_pid56897 00:16:12.611 Removing: /var/run/dpdk/spdk_pid56990 00:16:12.611 Removing: /var/run/dpdk/spdk_pid57086 00:16:12.611 Removing: /var/run/dpdk/spdk_pid57191 00:16:12.611 Removing: /var/run/dpdk/spdk_pid57283 00:16:12.611 Removing: /var/run/dpdk/spdk_pid57328 00:16:12.611 Removing: /var/run/dpdk/spdk_pid57359 00:16:12.611 Removing: /var/run/dpdk/spdk_pid57435 00:16:12.611 Removing: /var/run/dpdk/spdk_pid57541 00:16:12.611 Removing: /var/run/dpdk/spdk_pid57977 00:16:12.611 Removing: /var/run/dpdk/spdk_pid58041 00:16:12.611 Removing: /var/run/dpdk/spdk_pid58093 00:16:12.611 Removing: /var/run/dpdk/spdk_pid58109 00:16:12.611 Removing: /var/run/dpdk/spdk_pid58217 00:16:12.611 Removing: /var/run/dpdk/spdk_pid58233 00:16:12.611 Removing: /var/run/dpdk/spdk_pid58341 00:16:12.611 Removing: /var/run/dpdk/spdk_pid58352 00:16:12.611 Removing: /var/run/dpdk/spdk_pid58410 00:16:12.611 Removing: /var/run/dpdk/spdk_pid58428 00:16:12.611 Removing: /var/run/dpdk/spdk_pid58487 00:16:12.611 Removing: /var/run/dpdk/spdk_pid58504 00:16:12.611 Removing: /var/run/dpdk/spdk_pid58665 00:16:12.611 Removing: /var/run/dpdk/spdk_pid58701 00:16:12.611 Removing: /var/run/dpdk/spdk_pid58785 00:16:12.611 Removing: /var/run/dpdk/spdk_pid60038 00:16:12.611 Removing: /var/run/dpdk/spdk_pid60233 00:16:12.611 Removing: /var/run/dpdk/spdk_pid60368 00:16:12.611 Removing: /var/run/dpdk/spdk_pid60978 00:16:12.611 Removing: /var/run/dpdk/spdk_pid61173 00:16:12.611 Removing: /var/run/dpdk/spdk_pid61313 00:16:12.611 Removing: /var/run/dpdk/spdk_pid61918 00:16:12.611 Removing: /var/run/dpdk/spdk_pid62233 00:16:12.611 Removing: /var/run/dpdk/spdk_pid62368 00:16:12.611 Removing: /var/run/dpdk/spdk_pid63681 00:16:12.611 Removing: /var/run/dpdk/spdk_pid63923 00:16:12.611 Removing: /var/run/dpdk/spdk_pid64058 00:16:12.611 Removing: /var/run/dpdk/spdk_pid65382 00:16:12.611 Removing: /var/run/dpdk/spdk_pid65613 00:16:12.611 Removing: /var/run/dpdk/spdk_pid65753 
00:16:12.611 Removing: /var/run/dpdk/spdk_pid67072 00:16:12.611 Removing: /var/run/dpdk/spdk_pid67501 00:16:12.611 Removing: /var/run/dpdk/spdk_pid67630 00:16:12.611 Removing: /var/run/dpdk/spdk_pid69049 00:16:12.611 Removing: /var/run/dpdk/spdk_pid69297 00:16:12.611 Removing: /var/run/dpdk/spdk_pid69432 00:16:12.611 Removing: /var/run/dpdk/spdk_pid70839 00:16:12.611 Removing: /var/run/dpdk/spdk_pid71082 00:16:12.611 Removing: /var/run/dpdk/spdk_pid71221 00:16:12.611 Removing: /var/run/dpdk/spdk_pid72619 00:16:12.611 Removing: /var/run/dpdk/spdk_pid73084 00:16:12.611 Removing: /var/run/dpdk/spdk_pid73213 00:16:12.611 Removing: /var/run/dpdk/spdk_pid73346 00:16:12.611 Removing: /var/run/dpdk/spdk_pid73757 00:16:12.611 Removing: /var/run/dpdk/spdk_pid74458 00:16:12.611 Removing: /var/run/dpdk/spdk_pid74832 00:16:12.611 Removing: /var/run/dpdk/spdk_pid75493 00:16:12.611 Removing: /var/run/dpdk/spdk_pid75925 00:16:12.611 Removing: /var/run/dpdk/spdk_pid76651 00:16:12.611 Removing: /var/run/dpdk/spdk_pid77049 00:16:12.611 Removing: /var/run/dpdk/spdk_pid78925 00:16:12.611 Removing: /var/run/dpdk/spdk_pid79341 00:16:12.611 Removing: /var/run/dpdk/spdk_pid79765 00:16:12.611 Removing: /var/run/dpdk/spdk_pid81757 00:16:12.611 Removing: /var/run/dpdk/spdk_pid82217 00:16:12.611 Removing: /var/run/dpdk/spdk_pid82716 00:16:12.612 Removing: /var/run/dpdk/spdk_pid83751 00:16:12.612 Removing: /var/run/dpdk/spdk_pid84053 00:16:12.612 Removing: /var/run/dpdk/spdk_pid84951 00:16:12.612 Removing: /var/run/dpdk/spdk_pid85259 00:16:12.612 Removing: /var/run/dpdk/spdk_pid86165 00:16:12.612 Removing: /var/run/dpdk/spdk_pid86471 00:16:12.612 Removing: /var/run/dpdk/spdk_pid87125 00:16:12.612 Removing: /var/run/dpdk/spdk_pid87377 00:16:12.612 Removing: /var/run/dpdk/spdk_pid87432 00:16:12.612 Removing: /var/run/dpdk/spdk_pid87470 00:16:12.612 Removing: /var/run/dpdk/spdk_pid87693 00:16:12.612 Removing: /var/run/dpdk/spdk_pid87866 00:16:12.612 Removing: /var/run/dpdk/spdk_pid87953 
00:16:12.612 Removing: /var/run/dpdk/spdk_pid88046 00:16:12.612 Removing: /var/run/dpdk/spdk_pid88088 00:16:12.612 Removing: /var/run/dpdk/spdk_pid88114 00:16:12.612 Clean 00:16:12.612 09:50:51 -- common/autotest_common.sh@1451 -- # return 0 00:16:12.612 09:50:51 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:16:12.612 09:50:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:12.612 09:50:51 -- common/autotest_common.sh@10 -- # set +x 00:16:12.872 09:50:51 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:16:12.872 09:50:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:12.872 09:50:51 -- common/autotest_common.sh@10 -- # set +x 00:16:12.872 09:50:51 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:16:12.872 09:50:51 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:16:12.872 09:50:51 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:16:12.872 09:50:51 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:16:12.872 09:50:51 -- spdk/autotest.sh@394 -- # hostname 00:16:12.872 09:50:51 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:16:12.872 geninfo: WARNING: invalid characters removed from testname! 
00:16:39.430 09:51:13 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:16:39.430 09:51:16 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:16:39.691 09:51:18 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:16:42.997 09:51:20 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:16:44.932 09:51:23 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:16:47.502 09:51:25 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:16:50.057 09:51:28 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:16:50.057 09:51:28 -- spdk/autorun.sh@1 -- $ timing_finish 00:16:50.057 09:51:28 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:16:50.057 09:51:28 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:16:50.057 09:51:28 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:16:50.057 09:51:28 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:16:50.057 + [[ -n 5002 ]] 00:16:50.057 + sudo kill 5002 00:16:50.068 [Pipeline] } 00:16:50.089 [Pipeline] // timeout 00:16:50.094 [Pipeline] } 00:16:50.110 [Pipeline] // stage 00:16:50.116 [Pipeline] } 00:16:50.130 [Pipeline] // catchError 00:16:50.138 [Pipeline] stage 00:16:50.141 [Pipeline] { (Stop VM) 00:16:50.156 [Pipeline] sh 00:16:50.439 + vagrant halt 00:16:52.984 ==> default: Halting domain... 00:16:57.191 [Pipeline] sh 00:16:57.472 + vagrant destroy -f 00:17:00.016 ==> default: Removing domain... 
00:17:00.029 [Pipeline] sh 00:17:00.313 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:17:00.324 [Pipeline] } 00:17:00.340 [Pipeline] // stage 00:17:00.346 [Pipeline] } 00:17:00.362 [Pipeline] // dir 00:17:00.368 [Pipeline] } 00:17:00.383 [Pipeline] // wrap 00:17:00.389 [Pipeline] } 00:17:00.403 [Pipeline] // catchError 00:17:00.413 [Pipeline] stage 00:17:00.415 [Pipeline] { (Epilogue) 00:17:00.430 [Pipeline] sh 00:17:00.719 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:17:05.014 [Pipeline] catchError 00:17:05.016 [Pipeline] { 00:17:05.033 [Pipeline] sh 00:17:05.324 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:17:05.324 Artifacts sizes are good 00:17:05.335 [Pipeline] } 00:17:05.349 [Pipeline] // catchError 00:17:05.360 [Pipeline] archiveArtifacts 00:17:05.367 Archiving artifacts 00:17:05.459 [Pipeline] cleanWs 00:17:05.470 [WS-CLEANUP] Deleting project workspace... 00:17:05.471 [WS-CLEANUP] Deferred wipeout is used... 00:17:05.478 [WS-CLEANUP] done 00:17:05.480 [Pipeline] } 00:17:05.495 [Pipeline] // stage 00:17:05.500 [Pipeline] } 00:17:05.517 [Pipeline] // node 00:17:05.521 [Pipeline] End of Pipeline 00:17:05.573 Finished: SUCCESS